1. Huang CH, Chen CH, Tzeng JT, Chang AY, Fan CY, Sung CW, Lee CC, Huang EPC. The unreliability of crackles: insights from a breath sound study using physicians and artificial intelligence. NPJ Prim Care Respir Med 2024; 34:28. PMID: 39406795; PMCID: PMC11480396; DOI: 10.1038/s41533-024-00392-9.
Abstract
BACKGROUND AND INTRODUCTION In comparison to other physical assessment methods, inconsistency in respiratory evaluations continues to pose a major issue and challenge. OBJECTIVES This study aims to evaluate differences in the ability to identify different breath sounds. METHODS/DESCRIPTION In this prospective study, breath sounds from the Formosa Archive of Breath Sound were labeled by five physicians. Six artificial intelligence (AI) breath sound interpretation models were developed, based on all labeled data and on the labels from each of the five physicians, respectively. After labeling by the AIs and the physicians, discrepant labels were considered doubtful and were relabeled by two additional physicians. The final labels were determined by a majority vote among the physicians. The capability of breath sound identification for humans and AI was evaluated using sensitivity, specificity and the area under the receiver-operating characteristic curve (AUROC). RESULTS/OUTCOME A total of 11,532 breath sound files were labeled, with 579 doubtful labels identified. After relabeling and exclusion, 305 gold-standard labels remained. For wheezing, both human physicians and the AI model demonstrated good sensitivity (89.5% vs. 86.0%) and good specificity (96.4% vs. 95.2%). For crackles, both human physicians and the AI model showed good sensitivity (93.9% vs. 80.3%) but poor specificity (56.6% vs. 65.9%). Lower AUROC values were noted in crackle identification for both physicians and the AI model compared to wheezing. CONCLUSION Even with the assistance of artificial intelligence tools, accurately identifying crackles remains more challenging than identifying wheezes. Consequently, crackles are unreliable for medical decision-making, and further examination is warranted.
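
The evaluation protocol above (majority-vote gold standard, then per-label sensitivity, specificity, and AUROC) maps directly onto standard tooling. The following sketch uses made-up labels and probabilities rather than the Formosa Archive data, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def majority_vote(label_matrix):
    """Gold standard = label assigned by the majority of physicians.

    label_matrix: (n_files, n_raters) array of 0/1 annotations.
    """
    votes = label_matrix.sum(axis=1)
    return (votes > label_matrix.shape[1] / 2).astype(int)

def sensitivity_specificity(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: 3 physicians labelling 6 files for "crackles".
raters = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [1, 1, 1],
                   [0, 1, 0],
                   [1, 0, 1],
                   [0, 0, 1]])
gold = majority_vote(raters)

# Hypothetical model outputs: probability of crackles per file.
probs = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.3])
sens, spec = sensitivity_specificity(gold, (probs >= 0.5).astype(int))
auroc = roc_auc_score(gold, probs)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} AUROC={auroc:.2f}")
```
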
Affiliation(s)
- Chun-Hsiang Huang: Department of Emergency Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City, Taiwan, R.O.C.
- Chi-Hsin Chen: Department of Emergency Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City, Taiwan, R.O.C.
- Jing-Tong Tzeng: College of Semiconductor Research, National Tsing Hua University, Hsinchu City, Taiwan, R.O.C.
- An-Yan Chang: Department of Electrical Engineering, National Tsing Hua University, Hsinchu City, Taiwan, R.O.C.
- Cheng-Yi Fan: Department of Emergency Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City, Taiwan, R.O.C.
- Chih-Wei Sung: Department of Emergency Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City, Taiwan, R.O.C.; Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei City, Taiwan, R.O.C.
- Chi-Chun Lee: College of Semiconductor Research, National Tsing Hua University, Hsinchu City, Taiwan, R.O.C.; Department of Electrical Engineering, National Tsing Hua University, Hsinchu City, Taiwan, R.O.C.
- Edward Pei-Chuan Huang: Department of Emergency Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City, Taiwan, R.O.C.; Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei City, Taiwan, R.O.C.; Department of Emergency Medicine, National Taiwan University Hospital, Taipei City, Taiwan, R.O.C.

2. Crisdayanti IAPA, Nam SW, Jung SK, Kim SE. Attention Feature Fusion Network via Knowledge Propagation for Automated Respiratory Sound Classification. IEEE Open J Eng Med Biol 2024; 5:383-392. PMID: 38899013; PMCID: PMC11186653; DOI: 10.1109/ojemb.2024.3402139.
Abstract
Goal: In light of the COVID-19 pandemic, the early diagnosis of respiratory diseases has become increasingly crucial. Traditional diagnostic methods such as computed tomography (CT) and magnetic resonance imaging (MRI), while accurate, often face accessibility challenges. Lung auscultation, a simpler alternative, is subjective and highly dependent on the clinician's expertise. The pandemic has further exacerbated these challenges by restricting face-to-face consultations. This study aims to overcome these limitations by developing an automated respiratory sound classification system using deep learning, facilitating remote and accurate diagnoses. Methods: We developed a deep convolutional neural network (CNN) model that utilizes spectrographic representations of respiratory sounds within an image classification framework. Our model is enhanced with attention feature fusion of low-to-high-level information based on a knowledge propagation mechanism to increase classification effectiveness. This novel approach was evaluated using the ICBHI benchmark dataset and a larger, self-collected Pediatric dataset comprising outpatient children aged 1 to 6 years. Results: The proposed CNN model with knowledge propagation demonstrated superior performance compared to existing state-of-the-art models. Specifically, our model showed higher sensitivity in detecting abnormalities in the Pediatric dataset, indicating its potential for improving the accuracy of respiratory disease diagnosis. Conclusions: The integration of a knowledge propagation mechanism into a CNN model marks a significant advancement in the field of automated diagnosis of respiratory disease. This study paves the way for more accessible and precise healthcare solutions, which is especially crucial in pandemic scenarios.
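
The abstract does not spell out the fusion mechanism, so the module below is only a generic illustration of attention-based fusion of a low-level and a high-level CNN feature map (PyTorch); the paper's knowledge-propagation design is likely more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFeatureFusion(nn.Module):
    """Channel-attention fusion of a low-level and a high-level feature map.

    Generic sketch only; the cited knowledge-propagation mechanism may differ.
    """
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.project_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.project_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),       # squeeze spatial dimensions
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),                  # per-channel fusion weights in [0, 1]
        )

    def forward(self, low, high):
        low = self.project_low(low)
        # Upsample the coarse high-level map to the low-level resolution.
        high = F.interpolate(self.project_high(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        w = self.attn(low + high)          # attention computed on the merged map
        return w * low + (1 - w) * high    # attention-weighted fusion

# Feature maps from two depths of a spectrogram CNN (batch of 4).
low = torch.randn(4, 64, 32, 32)           # shallow layer: fine resolution
high = torch.randn(4, 256, 8, 8)           # deep layer: coarse, more channels
fused = AttentionFeatureFusion(64, 256, 128)(low, high)
print(fused.shape)                          # torch.Size([4, 128, 32, 32])
```
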
Affiliation(s)
- Ida A. P. A. Crisdayanti: Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Seoul 01811, South Korea
- Sung Woo Nam: Woorisoa Children's Hospital, Seoul 08291, South Korea
- Seong-Eun Kim: Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Seoul 01811, South Korea

3. Sabry AH, Dallal Bashi OI, Nik Ali NH, Al Kubaisi YM. Lung disease recognition methods using audio-based analysis with machine learning. Heliyon 2024; 10:e26218. PMID: 38420389; PMCID: PMC10900411; DOI: 10.1016/j.heliyon.2024.e26218.
Abstract
Computer-based automated approaches and improvements in lung sound recording techniques have made lung sound-based diagnostics better and less prone to subjectivity errors. Computer-based lung sound analysis makes it possible to evaluate lung sound features more thoroughly by analyzing changes in lung sound behavior, recording measurements, suppressing noise contamination, and producing graphical representations. This paper starts with a discussion of the need for this research area, providing an overview of the field and the motivations behind it. It then details the survey methodology used in this work and discusses the elements of sound-based lung disease classification using machine learning algorithms, including commonly considered datasets, feature extraction techniques, pre-processing methods, artifact removal methods, lung-heart sound separation, deep learning algorithms, and wavelet transforms of lung audio signals. The study also reviews prior surveys of lung sound screening, provides a summary table of these references, and discusses the gaps in the existing literature. It concludes that sound-based machine learning for the classification of respiratory diseases shows promising results. While we believe this material will prove valuable to physicians and researchers exploring sound-signal-based machine learning, large-scale investigations remain essential to solidify the findings and foster wider adoption within the medical community.
Affiliation(s)
- Ahmad H. Sabry: Department of Medical Instrumentation Engineering Techniques, Shatt Al-Arab University College, Basra, Iraq
- Omar I. Dallal Bashi: Medical Technical Institute, Northern Technical University, 95G2+P34, Mosul 41002, Iraq
- N.H. Nik Ali: School of Electrical Engineering, College of Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia
- Yasir Mahmood Al Kubaisi: Department of Sustainability Management, Dubai Academic Health Corporation, Dubai 4545, United Arab Emirates

4. Sfayyih AH, Sulaiman N, Sabry AH. A review on lung disease recognition by acoustic signal analysis with deep learning networks. J Big Data 2023; 10:101. PMID: 37333945; PMCID: PMC10259357; DOI: 10.1186/s40537-023-00762-z.
Abstract
Technologies such as deep learning and machine learning have recently made assistive diagnostic support in healthcare viable. Using auditory analysis and medical imaging, they also increase predictive accuracy for prompt and early disease detection. Such technological support helps medical professionals manage more patients amid the shortage of skilled human resources. Alongside serious illnesses such as lung cancer, the prevalence of breathing difficulties is gradually rising and endangering society. Because early prediction and immediate treatment are crucial for respiratory disorders, chest X-rays and respiratory sound recordings together are proving quite helpful. In contrast to the many review studies on lung disease classification/detection using deep learning algorithms, only two review studies based on signal analysis for lung disease diagnosis have been conducted, in 2011 and 2018. This work provides a review of lung disease recognition using acoustic signal analysis with deep learning networks. We anticipate that physicians and researchers working with sound-signal-based machine learning will find this material beneficial.
Affiliation(s)
- Alyaa Hamel Sfayyih: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Nasri Sulaiman: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Ahmad H. Sabry: Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, 64074 Baghdad, Iraq

5. Sfayyih AH, Sabry AH, Jameel SM, Sulaiman N, Raafat SM, Humaidi AJ, Kubaiaisi YMA. Acoustic-Based Deep Learning Architectures for Lung Disease Diagnosis: A Comprehensive Overview. Diagnostics (Basel) 2023; 13:1748. PMID: 37238233; DOI: 10.3390/diagnostics13101748.
Abstract
Lung auscultation has long been used as a valuable medical tool to assess respiratory health and has received considerable attention in recent years, notably following the coronavirus pandemic. It is used to assess a patient's respiratory function. Modern technological progress has driven the growth of computer-based respiratory sound investigation, a valuable tool for detecting lung abnormalities and diseases. Several recent studies have reviewed this important area, but none focus specifically on lung sound analysis with deep-learning architectures, and the information they provide is not sufficient for a good understanding of these techniques. This paper gives a complete review of prior deep-learning-based lung sound analysis architectures. Deep-learning-based respiratory sound analysis articles were retrieved from several databases, including PLOS, the ACM Digital Library, Elsevier, PubMed, MDPI, Springer, and IEEE. More than 160 publications were extracted and assessed. The paper discusses trends in pathology/lung sound research, the common features for classifying lung sounds, the datasets considered, classification methods, signal processing techniques, and statistical information based on previous study findings. Finally, the assessment concludes with a discussion of potential future improvements and recommendations.
Affiliation(s)
- Alyaa Hamel Sfayyih: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
- Ahmad H. Sabry: Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, Baghdad 64074, Iraq
- Nasri Sulaiman: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
- Safanah Mudheher Raafat: Department of Control and Systems Engineering, University of Technology, Baghdad 10011, Iraq
- Amjad J. Humaidi: Department of Control and Systems Engineering, University of Technology, Baghdad 10011, Iraq
- Yasir Mahmood Al Kubaiaisi: Department of Sustainability Management, Dubai Academic Health Corporation, Dubai 4545, United Arab Emirates

6. Song W, Han J. Patch-level contrastive embedding learning for respiratory sound classification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104338.

7. Albiges T, Sabeur Z, Arbab-Zavar B. Compressed Sensing Data with Performing Audio Signal Reconstruction for the Intelligent Classification of Chronic Respiratory Diseases. Sensors (Basel) 2023; 23:1439. PMID: 36772480; PMCID: PMC9921371; DOI: 10.3390/s23031439.
Abstract
Chronic obstructive pulmonary disease (COPD) involves a serious decline in lung function and has emerged over the last two decades as one of the most concerning health conditions worldwide, second only to cancer. The early diagnosis of COPD, particularly of lung function degradation, together with physician monitoring of the condition and prediction of the likelihood of exacerbation events in individual patients, remains an important challenge. Scalable deployment of data-driven artificial intelligence methods to meet this challenge in modern COPD healthcare has become critically important. In this study, we established the experimental foundations for acquiring and generating biomedical observation data for high-performance signal analysis and machine learning, leading towards intelligent diagnosis and monitoring of COPD for individual patients. Further, we investigated multi-resolution analysis and compression of lung audio signals and performed machine classification in two distinct experiments, involving (1) "Healthy" or "COPD" classes and (2) "Healthy", "COPD", or "Pneumonia" classes. Signal reconstruction from the features extracted for machine learning and testing was also performed to verify the integrity of the original audio recordings. The selected machine learning classifiers showed high levels of accuracy across diverse metrics, both for classifying Healthy versus COPD and for Healthy, COPD, and Pneumonia conditions. Future work will extend this study to new experiments using multi-modal sensing hardware and data fusion techniques for the development of next-generation COPD diagnosis systems.
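
One common way to realise "multi-resolution analysis and compression" with a reconstruction check is wavelet coefficient thresholding; the sketch below (PyWavelets, synthetic signal) is our reading of that idea, not the authors' actual pipeline.

```python
import numpy as np
import pywt

def compress_and_reconstruct(signal, wavelet="db4", level=5, keep=0.1):
    """Multi-resolution (wavelet) compression sketch: keep only the largest
    `keep` fraction of coefficients, then reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Zero out everything below the magnitude threshold (the "compression").
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr_compressed = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverec(pywt.array_to_coeffs(arr_compressed, slices,
                                            output_format="wavedec"), wavelet)
    rec = rec[: len(signal)]                      # trim wavelet padding, if any
    rel_err = np.linalg.norm(signal - rec) / np.linalg.norm(signal)
    return arr_compressed, rec, rel_err

# Synthetic "lung sound": a low-frequency burst plus noise, sampled at 4 kHz.
t = np.arange(0, 2.0, 1 / 4000)
x = np.sin(2 * np.pi * 150 * t) * np.exp(-((t - 1.0) ** 2) / 0.1)
x += 0.05 * np.random.randn(t.size)
_, x_rec, err = compress_and_reconstruct(x)
print(f"relative reconstruction error: {err:.3f}")
```
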
Affiliation(s)
- Zoheir Sabeur: Department of Computing and Informatics, Bournemouth University, Bournemouth BH12 5BB, UK

8. Zhang Q, Zhang J, Yuan J, Huang H, Zhang Y, Zhang B, Lv G, Lin S, Wang N, Liu X, Tang M, Wang Y, Ma H, Liu L, Yuan S, Zhou H, Zhao J, Li Y, Yin Y, Zhao L, Wang G, Lian Y. SPRSound: Open-Source SJTU Paediatric Respiratory Sound Database. IEEE Trans Biomed Circuits Syst 2022; 16:867-881. PMID: 36070274; DOI: 10.1109/tbcas.2022.3204910.
Abstract
Auscultation of respiratory sounds has proven advantageous for early respiratory diagnosis. Various methods have been proposed to perform automatic respiratory sound analysis to reduce subjective diagnosis and physicians' workload. However, these methods rely heavily on the quality of the respiratory sound database. In this work, we have developed the first open-access paediatric respiratory sound database, SPRSound. The database consists of 2,683 records and 9,089 respiratory sound events from 292 participants. Accurate labels are important for good prediction in the adventitious respiratory sound classification problem. A custom-made sound label annotation software (SoundAnn) has been developed to perform sound editing, sound annotation, and quality assurance evaluation. A team of 11 experienced paediatric physicians was involved in the entire process to establish a gold-standard reference for the dataset. To verify the robustness and accuracy of classification models, we investigated the effects of different feature extraction methods and machine learning classifiers on the classification performance of our dataset, achieving scores of 75.22%, 61.57%, 56.71%, and 37.84% for the four classification challenges at the event level and record level.

9. Pham L, Ngo D, Tran K, Hoang T, Schindler A, McLoughlin I. An Ensemble of Deep Learning Frameworks for Predicting Respiratory Anomalies. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4595-4598. PMID: 36086440; DOI: 10.1109/embc48229.2022.9871440.
Abstract
This paper evaluates a range of deep learning frameworks for detecting respiratory anomalies from input audio. Audio recordings of respiratory cycles collected from patients are transformed into time-frequency spectrograms to serve as front-end two-dimensional features. Cropped spectrogram segments are then used to train a range of back-end deep learning networks to classify respiratory cycles into predefined medically relevant categories. A set of the trained high-performance deep learning frameworks is then fused to obtain the best score. Our experiments on the ICBHI benchmark dataset achieve the highest ICBHI score to date of 57.3%, derived from a late fusion of inception-based and transfer-learning-based deep learning frameworks, easily outperforming other state-of-the-art systems. Keywords: respiratory disease, wheeze, crackle, inception, convolutional neural network, transfer learning.
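
Late fusion of trained frameworks usually means averaging their class posteriors before taking the arg-max, and the ICBHI score is commonly reported as the mean of sensitivity and specificity. The sketch below illustrates both with hypothetical posteriors; consult the challenge rules for the exact scoring details.

```python
import numpy as np

def icbhi_score(y_true, y_pred, normal_class=0):
    """ICBHI-style score: mean of sensitivity (over abnormal classes) and
    specificity (over the normal class), as commonly used in this literature."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    normal = y_true == normal_class
    specificity = np.mean(y_pred[normal] == y_true[normal])
    sensitivity = np.mean(y_pred[~normal] == y_true[~normal])
    return 0.5 * (sensitivity + specificity), sensitivity, specificity

def late_fusion(prob_list, weights=None):
    """Fuse per-model class posteriors by (weighted) averaging."""
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    if weights is not None:
        probs = probs * np.asarray(weights)[:, None, None]
    return probs.mean(axis=0).argmax(axis=1)

# Hypothetical posteriors from two frameworks over 4 cycles and 4 classes
# (0 = normal, 1 = crackle, 2 = wheeze, 3 = both).
p_inception = np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.2, 0.6, 0.1, 0.1],
                        [0.3, 0.2, 0.4, 0.1],
                        [0.4, 0.3, 0.2, 0.1]])
p_transfer = np.array([[0.6, 0.2, 0.1, 0.1],
                       [0.1, 0.7, 0.1, 0.1],
                       [0.2, 0.1, 0.6, 0.1],
                       [0.1, 0.2, 0.2, 0.5]])
y_true = [0, 1, 2, 3]
y_pred = late_fusion([p_inception, p_transfer])
print(icbhi_score(y_true, y_pred))
```
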

10. Pham Thi Viet H, Nguyen Thi Ngoc H, Tran Anh V, Hoang Quang H. Classification of lung sounds using scalogram representation of sound segments and convolutional neural network. J Med Eng Technol 2022; 46:270-279. PMID: 35212591; DOI: 10.1080/03091902.2022.2040624.
Abstract
Lung auscultation is one of the most common methods for screening lung diseases. The increasing rate of respiratory diseases creates a need for robust methods to detect abnormalities in patients' breathing sounds. Lung sound analysis stands out as a promising approach to automatic screening, serving as a second opinion for doctors or as a stand-alone device for preliminary screening in remote areas. In previous research on lung sound classification using the ICBHI database on Kaggle, lung audio recordings are converted to spectral images and fed into deep neural networks for training. A few studies use the scalogram, but they focus on classification among different lung diseases; scalograms are rarely used for categorising the sound types themselves. In this paper, we combine scalograms and neural networks for the classification of lung sound types. Padding methods and augmentation are also considered to evaluate their impact on the classification score, and ensemble learning with voting across many models is incorporated to increase classification accuracy. The trained and evaluated models show a prominent improvement in classification on the benchmark ICBHI database.
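
A scalogram is simply the magnitude of a continuous wavelet transform rendered as an image; a minimal version of that front end, with illustrative parameters and a synthetic segment, could look like this.

```python
import numpy as np
import pywt

def scalogram(segment, fs, wavelet="morl", n_scales=64):
    """Continuous wavelet transform magnitude ("scalogram") of a sound segment.

    Generic sketch of the scalogram front end; parameters are illustrative.
    """
    scales = np.arange(1, n_scales + 1)
    coefs, freqs = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coefs), freqs        # (n_scales, n_samples) image for a CNN

# Synthetic respiratory-cycle segment: 1 s at 4 kHz with a wheeze-like tone.
fs = 4000
t = np.arange(0, 1.0, 1 / fs)
segment = 0.3 * np.sin(2 * np.pi * 400 * t) + 0.05 * np.random.randn(t.size)
image, freqs = scalogram(segment, fs)
print(image.shape, freqs.min(), freqs.max())
```
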
Affiliation(s)
- Huyen Nguyen Thi Ngoc: School of Electronics and Telecommunications, Hanoi University of Science and Technology, Hanoi, Vietnam
- Vu Tran Anh: School of Electronics and Telecommunications, Hanoi University of Science and Technology, Hanoi, Vietnam
- Huy Hoang Quang: School of Electronics and Telecommunications, Hanoi University of Science and Technology, Hanoi, Vietnam

11. Nguyen T, Pernkopf F. Lung Sound Classification Using Co-tuning and Stochastic Normalization. IEEE Trans Biomed Eng 2022; 69:2872-2882. PMID: 35254969; DOI: 10.1109/tbme.2022.3156293.
Abstract
Computational methods for lung sound analysis are beneficial for computer-aided diagnosis support, storage and monitoring in critical care. In this paper, we use pre-trained ResNet models as backbone architectures for classification of adventitious lung sounds and respiratory diseases. The learned representation of the pre-trained model is transferred by using vanilla fine-tuning, co-tuning, stochastic normalization and the combination of the co-tuning and stochastic normalization techniques. Furthermore, data augmentation in both time domain and time-frequency domain is used to account for the class imbalance of the ICBHI and our multi-channel lung sound dataset. Additionally, we introduce spectrum correction to account for the variations of the recording device properties on the ICBHI dataset. Empirically, our proposed systems mostly outperform all state-of-the-art lung sound classification systems for the adventitious lung sounds and respiratory diseases of both datasets.
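
As a point of reference, the vanilla fine-tuning baseline mentioned in the abstract can be set up as below with an ImageNet-pretrained ResNet; co-tuning, stochastic normalization, and spectrum correction (the paper's actual contributions) are not reproduced in this sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(n_classes=4, freeze_backbone=False):
    """Vanilla fine-tuning baseline: ImageNet-pretrained ResNet-34 with a new
    classification head for adventitious-sound classes."""
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)   # new head
    return model

model = build_finetune_model()
# Spectrograms are fed as 3-channel images, e.g. by repeating the single channel.
dummy_batch = torch.randn(8, 3, 224, 224)
logits = model(dummy_batch)
print(logits.shape)                     # torch.Size([8, 4])

# Typical recipe: smaller learning rate for the pretrained backbone than the head.
optimizer = torch.optim.Adam([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
     "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```
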

12. Petmezas G, Cheimariotis GA, Stefanopoulos L, Rocha B, Paiva RP, Katsaggelos AK, Maglaveras N. Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function. Sensors (Basel) 2022; 22:1232. PMID: 35161977; PMCID: PMC8838187; DOI: 10.3390/s22031232.
Abstract
Respiratory diseases constitute one of the leading causes of death worldwide and directly affect the patient's quality of life. Early diagnosis and patient monitoring, which conventionally include lung auscultation, are essential for the efficient management of respiratory diseases. Manual lung sound interpretation is a subjective and time-consuming process that requires high medical expertise. The capabilities that deep learning offers could be exploited so that robust lung sound classification models can be designed. In this paper, we propose a novel hybrid neural model that implements the focal loss (FL) function to deal with training data imbalance. Features initially extracted from short-time Fourier transform (STFT) spectrograms via a convolutional neural network (CNN) are given as input to a long short-term memory (LSTM) network that memorizes the temporal dependencies between data and classifies four types of lung sounds, including normal, crackles, wheezes, and both crackles and wheezes. The model was trained and tested on the ICBHI 2017 Respiratory Sound Database and achieved state-of-the-art results using three different data splitting strategies: sensitivity 47.37%, specificity 82.46%, score 64.92% and accuracy 73.69% for the official 60/40 split; sensitivity 52.78%, specificity 84.26%, score 68.52% and accuracy 76.39% using interpatient 10-fold cross validation; and sensitivity 60.29% and accuracy 74.57% using leave-one-out cross validation.
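
The focal loss used to counter class imbalance has a standard form, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); a compact PyTorch version (not the authors' code) is shown below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multi-class focal loss: down-weights well-classified examples so that
    training focuses on hard, minority-class lung-sound cycles."""
    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha        # optional per-class weights, shape (n_classes,)

    def forward(self, logits, targets):
        log_probs = F.log_softmax(logits, dim=-1)
        ce = F.nll_loss(log_probs, targets, weight=self.alpha, reduction="none")
        p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
        return ((1.0 - p_t) ** self.gamma * ce).mean()

# Hypothetical logits for 4 classes: normal, crackles, wheezes, both.
logits = torch.randn(8, 4, requires_grad=True)
targets = torch.randint(0, 4, (8,))
loss = FocalLoss(gamma=2.0)(logits, targets)
loss.backward()
```
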
Affiliation(s)
- Georgios Petmezas: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Grigorios-Aris Cheimariotis: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Leandros Stefanopoulos: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece
- Bruno Rocha: Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal
- Rui Pedro Paiva: Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal
- Aggelos K. Katsaggelos: Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208, USA
- Nicos Maglaveras: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, Medical School, Aristotle University of Thessaloniki, GR 54124 Thessaloniki, Greece

13. Uyttendaele V, Guiot J, Chase JG, Desaive T. Does Facemask Impact Diagnostic During Pulmonary Auscultation? IFAC-PapersOnLine 2021; 54:192-197. PMID: 38621011; PMCID: PMC8562133; DOI: 10.1016/j.ifacol.2021.10.254.
Abstract
Facemasks have been widely used in hospitals, especially since the emergence of the coronavirus disease 2019 (COVID-19) pandemic, which often severely affects respiratory function. Masks protect patients from contagious airborne transmission and are thus especially important for chronic respiratory disease (CRD) patients. However, masks also increase air resistance and thus the work of breathing, which may impact pulmonary auscultation, the primary respiratory examination, and its diagnostic acuity. This study is the first to assess the impact of facemasks on clinical auscultation diagnosis. Lung sounds from 29 patients were digitally recorded using an electronic stethoscope; for each patient, one recording was taken wearing a surgical mask and one without. Recorded signals were segmented into breath cycles using an autocorrelation algorithm. In total, 87 breath cycles were identified from sounds with a mask and 82 without. Time-frequency analysis of the signals was used to extract comparison features such as peak frequency, median frequency, band power, and spectral integration. None of the extracted features describing frequency content, its evolution, or power differed significantly between respiratory cycles with or without a mask. This early-stage study thus suggests that facemasks have a minor impact on clinical diagnostic outcomes in pulmonary auscultation. However, further analysis is necessary, for example of differences in adventitious sound characteristics with and without a mask, to determine whether facemasks have no discernible effect on diagnostic outcomes in clinical practice.
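
The processing chain described here (autocorrelation-based cycle segmentation followed by spectral comparison features) can be approximated with SciPy; the sketch below uses a synthetic signal and simplified choices, for example estimating only the dominant breath period rather than full cycle boundaries.

```python
import numpy as np
from scipy.signal import welch, hilbert
from scipy.integrate import trapezoid

def breath_period(signal, fs, env_fs=50, min_period=1.0, max_period=10.0):
    """Estimate the dominant breath-cycle duration from the autocorrelation of
    the amplitude envelope (a simplified stand-in for full segmentation)."""
    env = np.abs(hilbert(signal))[:: int(fs / env_fs)]     # coarse envelope
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    lo, hi = int(min_period * env_fs), int(max_period * env_fs)
    return (lo + np.argmax(ac[lo:hi])) / env_fs            # seconds per cycle

def spectral_features(signal, fs):
    """Peak frequency, median frequency and band power from a Welch PSD."""
    f, psd = welch(signal, fs=fs, nperseg=1024)
    cdf = np.cumsum(psd) / np.sum(psd)
    return {
        "peak_freq_hz": f[np.argmax(psd)],
        "median_freq_hz": f[np.searchsorted(cdf, 0.5)],
        "band_power": trapezoid(psd, f),                   # spectral integration
    }

# Synthetic auscultation signal: 20 s at 4 kHz, breathing modulation at 0.25 Hz.
fs = 4000
t = np.arange(0, 20, 1 / fs)
x = (0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)) * np.random.randn(t.size)
print(round(breath_period(x, fs), 2), spectral_features(x, fs))
```
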
Affiliation(s)
- Julien Guiot: Department of Pneumology, University Hospital of Liège, Belgium
- J. Geoffrey Chase: Department of Mechanical Engineering, University of Canterbury, Christchurch, New Zealand

14. Gairola S, Tom F, Kwatra N, Jain M. RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:527-530. PMID: 34891348; DOI: 10.1109/embc46164.2021.9630091.
Abstract
Auscultation of respiratory sounds is the primary tool for screening and diagnosing lung diseases. Automated analysis, coupled with digital stethoscopes, can play a crucial role in enabling tele-screening of fatal lung diseases. Deep neural networks (DNNs) have shown potential to solve such problems and are an obvious choice. However, DNNs are data hungry, and the largest respiratory dataset, ICBHI, has only 6,898 breathing cycles, which is quite small for training a satisfactory DNN model. In this work we propose RespireNet, a simple CNN-based model, along with a suite of novel techniques (device-specific fine-tuning, concatenation-based augmentation, blank region clipping, and smart padding) enabling us to efficiently use the small-sized dataset. We perform extensive evaluation on the ICBHI dataset and improve upon the state-of-the-art results for 4-class classification by 2.2%. Code: https://github.com/microsoft/RespireNet.
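
Of the listed techniques, concatenation-based augmentation and padding to a fixed input length are the easiest to illustrate; the sketch below is our reading of those ideas on synthetic cycles, and the repository linked above remains the authoritative implementation.

```python
import numpy as np

def concat_augment(cycles, labels, target_class, n_new, rng=None):
    """Concatenation-based augmentation (as we read the paper's description):
    build extra samples for an under-represented class by concatenating two
    randomly chosen cycles of that same class."""
    if rng is None:
        rng = np.random.default_rng(0)
    pool = [c for c, y in zip(cycles, labels) if y == target_class]
    new_samples = []
    for _ in range(n_new):
        a, b = rng.choice(len(pool), size=2, replace=True)
        new_samples.append(np.concatenate([pool[a], pool[b]]))
    return new_samples, [target_class] * n_new

def pad_or_crop(cycle, target_len):
    """Fixed-length input for the CNN: crop long cycles, tile short ones
    (a simple stand-in for the paper's "smart padding")."""
    if len(cycle) >= target_len:
        return cycle[:target_len]
    reps = int(np.ceil(target_len / len(cycle)))
    return np.tile(cycle, reps)[:target_len]

# Hypothetical breathing cycles (1-3 s at 4 kHz) with 4-class labels.
rng = np.random.default_rng(0)
cycles = [rng.standard_normal(rng.integers(4000, 12000)) for _ in range(20)]
labels = rng.integers(0, 4, size=20).tolist()
aug, aug_labels = concat_augment(cycles, labels, target_class=3, n_new=5, rng=rng)
batch = np.stack([pad_or_crop(c, 7 * 4000) for c in cycles + aug])
print(batch.shape)
```
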

15. Pham L, Phan H, Schindler A, King R, Mertins A, McLoughlin I. Inception-Based Network and Multi-Spectrogram Ensemble Applied To Predict Respiratory Anomalies and Lung Diseases. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:253-256. PMID: 34891284; DOI: 10.1109/embc46164.2021.9629857.
Abstract
This paper presents an inception-based deep neural network for detecting lung diseases using respiratory sound input. Recordings of respiratory sound collected from patients are first transformed into spectrograms where both spectral and temporal information are well represented, in a process referred to as front-end feature extraction. These spectrograms are then fed into the proposed network, in a process referred to as back-end classification, for detecting whether patients suffer from lung-related diseases. Our experiments, conducted over the ICBHI benchmark metadataset of respiratory sound, achieve competitive ICBHI scores of 0.53/0.45 and 0.87/0.85 regarding respiratory anomaly and disease detection, respectively.

16. Pham L, Phan H, Palaniappan R, Mertins A, McLoughlin I. CNN-MoE Based Framework for Classification of Respiratory Anomalies and Lung Disease Detection. IEEE J Biomed Health Inform 2021; 25:2938-2947. PMID: 33684048; DOI: 10.1109/jbhi.2021.3064237.
Abstract
This paper presents and explores a robust deep learning framework for auscultation analysis. This aims to classify anomalies in respiratory cycles and detect diseases, from respiratory sound recordings. The framework begins with front-end feature extraction that transforms input sound into a spectrogram representation. Then, a back-end deep learning network is used to classify the spectrogram features into categories of respiratory anomaly cycles or diseases. Experiments, conducted over the ICBHI benchmark dataset of respiratory sounds, confirm three main contributions towards respiratory-sound analysis. Firstly, we carry out an extensive exploration of the effect of spectrogram types, spectral-time resolution, overlapping/non-overlapping windows, and data augmentation on final prediction accuracy. This leads us to propose a novel deep learning system, built on the proposed framework, which outperforms current state-of-the-art methods. Finally, we apply a Teacher-Student scheme to achieve a trade-off between model performance and model complexity which holds promise for building real-time applications.
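
A Teacher-Student trade-off is typically trained with a distillation objective that mixes hard-label cross-entropy with a KL term on temperature-softened teacher posteriors; the sketch below shows that standard objective, which may differ in detail from the paper's scheme.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=3.0, alpha=0.5):
    """Teacher-Student objective: cross-entropy on the hard labels plus KL
    divergence towards the (larger) teacher's softened posteriors."""
    hard = F.cross_entropy(student_logits, targets)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # standard temperature scaling of the soft term
    return alpha * hard + (1.0 - alpha) * soft

# Hypothetical logits for a batch of 8 respiratory cycles and 4 classes.
teacher_logits = torch.randn(8, 4)
student_logits = torch.randn(8, 4, requires_grad=True)
targets = torch.randint(0, 4, (8,))
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
```
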

17. Automatic acoustic identification of respiratory diseases. Evolving Systems 2021. DOI: 10.1007/s12530-020-09339-0.

18. Wu L, Li L. Investigating into segmentation methods for diagnosis of respiratory diseases using adventitious respiratory sounds. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:768-771. PMID: 33018099; DOI: 10.1109/embc44109.2020.9175783.
Abstract
Respiratory conditions have received a great amount of attention because respiratory diseases have recently become leading causes of death globally. Traditionally, the stethoscope is used in early diagnosis, but it requires a clinician with extensive training experience to provide an accurate diagnosis. Accordingly, an objective and fast diagnostic solution for respiratory diseases is highly demanded. Adventitious respiratory sounds (ARSs), such as crackles, are of main concern during diagnosis since they are indications of various respiratory diseases. The characteristics of crackles are therefore informative and valuable for developing a computerised approach to pathology-based diagnosis. In this work, we propose a framework combining a random forest classifier and the Empirical Mode Decomposition (EMD) method, focusing on a multi-classification task of identifying subjects in six respiratory conditions (healthy, bronchiectasis, bronchiolitis, COPD, pneumonia and URTI). Specifically, 14 combinations of respiratory sound segments were compared, and we found that segmentation plays an important role in classifying different respiratory conditions. The best-performing classifier (accuracy = 0.88, precision = 0.91, recall = 0.87, specificity = 0.91, F1-score = 0.81) was trained with features extracted from the combination of the early inspiratory phase and the entire inspiratory phase. To the best of our knowledge, we are the first to address this challenging multi-classification problem.
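
A minimal EMD-plus-random-forest pipeline in the spirit of this framework could look like the sketch below; it assumes the PyEMD package (installable as EMD-signal), uses a generic IMF feature set, and trains on random placeholder segments rather than real inspiratory-phase data.

```python
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal
from sklearn.ensemble import RandomForestClassifier

def emd_features(segment, max_imfs=4):
    """Summary statistics of the first few intrinsic mode functions (IMFs).
    A generic EMD feature set, not necessarily the one used in the paper."""
    imfs = EMD().emd(segment, max_imf=max_imfs)
    feats = []
    for imf in imfs[:max_imfs]:
        feats += [imf.std(), np.mean(np.abs(imf)), np.mean(imf ** 2)]
    feats += [0.0] * (3 * max_imfs - len(feats))    # pad if fewer IMFs found
    return np.array(feats)

# Hypothetical early-inspiratory segments (0.25 s at 4 kHz) and condition labels
# (e.g. 0 = healthy ... 5 = URTI); random placeholders for illustration only.
rng = np.random.default_rng(1)
X = np.stack([emd_features(rng.standard_normal(1000)) for _ in range(20)])
y = rng.integers(0, 6, size=20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))                     # training accuracy only, for illustration
```
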

19. Pham L, McLoughlin I, Phan H, Tran M, Nguyen T, Palaniappan R. Robust Deep Learning Framework For Predicting Respiratory Anomalies and Diseases. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:164-167. PMID: 33017955; DOI: 10.1109/embc44109.2020.9175704.
Abstract
This paper presents a robust deep learning framework developed to detect respiratory diseases from recordings of respiratory sounds. The complete detection process firstly involves front end feature extraction where recordings are transformed into spectrograms that convey both spectral and temporal information. Then a back-end deep learning model classifies the features into classes of respiratory disease or anomaly. Experiments, conducted over the ICBHI benchmark dataset of respiratory sounds, evaluate the ability of the framework to classify sounds. Two main contributions are made in this paper. Firstly, we provide an extensive analysis of how factors such as respiratory cycle length, time resolution, and network architecture, affect final prediction accuracy. Secondly, a novel deep learning based framework is proposed for detection of respiratory diseases and shown to perform extremely well compared to state of the art methods.

20. Demir F, Sengur A, Bajaj V. Convolutional neural networks based efficient approach for classification of lung diseases. Health Inf Sci Syst 2019; 8:4. PMID: 31915523; PMCID: PMC6928168; DOI: 10.1007/s13755-019-0091-3.
Abstract
Treatment of lung diseases, which are the third most common cause of death in the world, is of great importance in the medical field. Many studies using lung sounds recorded with a stethoscope have been conducted in the literature in order to diagnose lung diseases with artificial intelligence-compatible devices and to assist experts in their diagnosis. In this paper, the ICBHI 2017 database, which includes different sampling frequencies, noise, and background sounds, was used for the classification of lung sounds. The lung sound signals were first converted to spectrogram images using a time–frequency method, with the short-time Fourier transform (STFT) used as the time–frequency transformation. Two deep learning based approaches were used for lung sound classification. In the first approach, a pre-trained deep convolutional neural network (CNN) model was used for feature extraction and a support vector machine (SVM) classifier was used to classify the lung sounds. In the second approach, the pre-trained deep CNN model was fine-tuned (transfer learning) on the spectrogram images for lung sound classification. The accuracies of the proposed methods were tested using ten-fold cross validation and were 65.5% and 63.09% for the first and second approaches, respectively. The obtained accuracies were compared with some existing results and found to be better.
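
The first approach (a pre-trained CNN as a fixed feature extractor feeding an SVM) is straightforward to sketch; the backbone, input sizes, and data below are stand-ins, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Feature extractor: ImageNet-pretrained CNN with its classifier removed.
# The specific backbone here is arbitrary, chosen only for illustration.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()                  # output: 512-d deep features
backbone.eval()

@torch.no_grad()
def deep_features(spectrogram_batch):
    """spectrogram_batch: (N, 3, 224, 224) tensor of STFT images."""
    return backbone(spectrogram_batch).numpy()

# Hypothetical stand-in for ICBHI spectrogram images and their class labels.
X = deep_features(torch.randn(60, 3, 224, 224))
y = torch.randint(0, 4, (60,)).numpy()

# SVM classifier evaluated with cross validation (the paper used ten folds).
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print(scores.mean())
```
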
Affiliation(s)
- Fatih Demir: Electrical and Electronics Engineering Dept., Technology Faculty, Firat University, Elazig, Turkey
- Abdulkadir Sengur: Electrical and Electronics Engineering Dept., Technology Faculty, Firat University, Elazig, Turkey

21. Rocha BM, Filos D, Mendes L, Serbes G, Ulukaya S, Kahya YP, Jakovljevic N, Turukalo TL, Vogiatzis IM, Perantoni E, Kaimakamis E, Natsiavas P, Oliveira A, Jácome C, Marques A, Maglaveras N, Pedro Paiva R, Chouvarda I, de Carvalho P. An open access database for the evaluation of respiratory sound classification algorithms. Physiol Meas 2019; 40:035001. PMID: 30708353; DOI: 10.1088/1361-6579/ab03ea.
Abstract
OBJECTIVE Over the last few decades, there has been significant interest in the automatic analysis of respiratory sounds. However, currently there are no publicly available large databases with which new algorithms can be evaluated and compared. Further developments in the field are dependent on the creation of such databases. APPROACH This paper describes a public respiratory sound database, which was compiled for an international competition, the first scientific challenge of the IFMBE's International Conference on Biomedical and Health Informatics. The database includes 920 recordings acquired from 126 participants and two sets of annotations. One set contains 6,898 annotated respiratory cycles, some including crackles, wheezes, or a combination of both, and some with no adventitious respiratory sounds. In the other set, precise locations of 10,775 events of crackles and wheezes were annotated. MAIN RESULTS The best system that participated in the challenge achieved an average score of 52.5% with the respiratory cycle annotations and an average score of 91.2% with the event annotations. SIGNIFICANCE The creation and public release of this database will be useful to the research community and could bring attention to the respiratory sound classification problem.
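
For anyone picking up this database, the per-recording annotation files are commonly parsed as tab-separated cycle rows and mapped to the usual four classes; the helper below assumes that layout (verify it against your copy) and a local folder name that is purely hypothetical.

```python
import csv
from pathlib import Path

def load_cycles(annotation_file):
    """Read one per-recording annotation file.

    The .txt files are commonly distributed as tab-separated rows of
    (cycle start [s], cycle end [s], crackles 0/1, wheezes 0/1); adjust the
    parsing if your copy of the database differs.
    """
    cycles = []
    with open(annotation_file, newline="") as f:
        for start, end, crackles, wheezes in csv.reader(f, delimiter="\t"):
            cycles.append({
                "start": float(start),
                "end": float(end),
                "crackles": bool(int(crackles)),
                "wheezes": bool(int(wheezes)),
            })
    return cycles

def cycle_class(cycle):
    """Map a cycle to the usual 4 classes: normal / crackles / wheezes / both."""
    if cycle["crackles"] and cycle["wheezes"]:
        return "both"
    if cycle["crackles"]:
        return "crackles"
    if cycle["wheezes"]:
        return "wheezes"
    return "normal"

# Hypothetical usage over a local copy of the database.
db_dir = Path("ICBHI_final_database")
for txt in sorted(db_dir.glob("*.txt"))[:3]:
    labels = [cycle_class(c) for c in load_cycles(txt)]
    print(txt.stem, labels)
```
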
Affiliation(s)
- Bruno M. Rocha: Department of Informatics Engineering, Centre for Informatics and Systems (CISUC), University of Coimbra, Coimbra, Portugal (author to whom any correspondence should be addressed)