51
Alqudaihi KS, Aslam N, Khan IU, Almuhaideb AM, Alsunaidi SJ, Ibrahim NMAR, Alhaidari FA, Shaikh FS, Alsenbel YM, Alalharith DM, Alharthi HM, Alghamdi WM, Alshahrani MS. Cough Sound Detection and Diagnosis Using Artificial Intelligence Techniques: Challenges and Opportunities. IEEE Access 2021; 9:102327-102344. [PMID: 34786317] [PMCID: PMC8545201] [DOI: 10.1109/access.2021.3097559]
Abstract
Coughing is a common symptom of several respiratory diseases, and the sound and type of a cough are useful features to consider when diagnosing a disease. Respiratory infections pose a significant risk to human lives worldwide and impose a significant economic burden, particularly in countries with limited therapeutic resources. In this study, we reviewed the latest technologies proposed to control the impact of respiratory diseases. Artificial Intelligence (AI) is a promising technology that aids in data analysis and prediction of outcomes, thereby supporting people's well-being. We found that the cough symptom can be reliably used by AI algorithms to detect and diagnose different types of known diseases, including pneumonia, pulmonary edema, asthma, tuberculosis (TB), COVID-19, pertussis, and other respiratory diseases. We also identified the techniques that produced the best results for diagnosing respiratory disease from cough samples. This study presents the most recent challenges, solutions, and opportunities in respiratory disease detection and diagnosis, allowing practitioners and researchers to develop better techniques.
Affiliation(s)
- Kawther S. Alqudaihi, Nida Aslam, Irfan Ullah Khan, Shikah J. Alsunaidi, Nehad M. Abdel Rahman Ibrahim, Yasmine M. Alsenbel, Dima M. Alalharith, Hajar M. Alharthi, Wejdan M. Alghamdi: Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Abdullah M. Almuhaideb, Fahd A. Alhaidari: Department of Networks and Communications, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Fatema S. Shaikh: Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Mohammed S. Alshahrani: Department of Emergency Medicine, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
52
Hsu FS, Huang SR, Huang CW, Huang CJ, Cheng YR, Chen CC, Hsiao J, Chen CW, Chen LC, Lai YC, Hsu BF, Lin NJ, Tsai WL, Wu YL, Tseng TL, Tseng CT, Chen YT, Lai F. Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database-HF_Lung_V1. PLoS One 2021; 16:e0254134. [PMID: 34197556] [PMCID: PMC8248710] [DOI: 10.1371/journal.pone.0254134]
Abstract
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios, such as monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also compared performance between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in most of the defined tasks in terms of F1 scores and areas under the receiver operating characteristic curves. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
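For readers unfamiliar with the recurrent variants benchmarked in this study, the gating mechanism of a GRU and the forward/backward pass of a BiGRU can be sketched in plain NumPy. This is an illustrative toy only, not the authors' HF_Lung_V1 code; the layer sizes and initialization here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # One weight matrix per gate, acting on the concatenation [x_t, h_{t-1}].
        self.Wz = rng.uniform(-s, s, (hidden_size, input_size + hidden_size))
        self.Wr = rng.uniform(-s, s, (hidden_size, input_size + hidden_size))
        self.Wh = rng.uniform(-s, s, (hidden_size, input_size + hidden_size))
        self.hidden_size = hidden_size

    def step(self, x, h_prev):
        xh = np.concatenate([x, h_prev])
        z = sigmoid(self.Wz @ xh)          # update gate: how much to overwrite
        r = sigmoid(self.Wr @ xh)          # reset gate: how much history to use
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h_prev]))
        return (1.0 - z) * h_prev + z * h_cand

def run_bidirectional(cell_fw, cell_bw, xs):
    """BiGRU: run one cell forward and one backward, concatenate the states."""
    h_fw = np.zeros(cell_fw.hidden_size)
    h_bw = np.zeros(cell_bw.hidden_size)
    fw, bw = [], []
    for x in xs:
        h_fw = cell_fw.step(x, h_fw)
        fw.append(h_fw)
    for x in reversed(xs):
        h_bw = cell_bw.step(x, h_bw)
        bw.append(h_bw)
    bw.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fw, bw)]
```

The bidirectional pass is why BiGRU/BiLSTM tend to win on segment labeling tasks such as breath phase detection: each frame's output sees both past and future context.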
Affiliation(s)
- Fu-Shun Hsu: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan; Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Chao-Jung Huang: Joint Research Center for Artificial Intelligence Technology and All Vista Healthcare, National Taiwan University, Taipei, Taiwan
- Yuan-Ren Cheng: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan; Department of Life Science, College of Life Science, National Taiwan University, Taipei, Taiwan; Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan
- Jack Hsiao: HCC Healthcare Group, New Taipei, Taiwan
- Chung-Wei Chen: Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Li-Chin Chen: Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Yen-Chun Lai: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Bi-Fang Hsu: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Nian-Jhen Lin: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan; Division of Pulmonary Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Wan-Ling Tsai: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Lin Wu: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Tsun Chen: Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Feipei Lai: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
53
Gupta P, Wen H, Di Francesco L, Ayazi F. Detection of pathological mechano-acoustic signatures using precision accelerometer contact microphones in patients with pulmonary disorders. Sci Rep 2021; 11:13427. [PMID: 34183695] [PMCID: PMC8238985] [DOI: 10.1038/s41598-021-92666-2]
Abstract
Monitoring pathological mechano-acoustic signals emanating from the lungs is critical for timely and cost-effective healthcare delivery. Adventitious lung sounds, including crackles, wheezes, rhonchi, bronchial breath sounds, stridor, and pleural rub, together with abnormal breathing patterns, function as essential clinical biomarkers for the early identification, accurate diagnosis, and monitoring of pulmonary disorders. Here, we present a wearable sensor module comprising a hermetically encapsulated, high-precision accelerometer contact microphone (ACM) which enables both episodic and longitudinal assessment of lung sounds, breathing patterns, and respiratory rates using a single integrated sensor. This enhanced ACM sensor leverages a nano-gap transduction mechanism to achieve high sensitivity to the weak high-frequency vibrations that occur on the surface of the skin due to underlying lung pathologies. The performance of the ACM sensor was compared to recordings from a state-of-the-art digital stethoscope, and the efficacy of the developed system is demonstrated in an exploratory research study recording pathological mechano-acoustic signals from hospitalized patients with a chronic obstructive pulmonary disease (COPD) exacerbation, pneumonia, or acute decompensated heart failure. This unobtrusive wearable system can enable both episodic and longitudinal evaluation of lung sounds for the early detection and/or ongoing monitoring of pulmonary disease.
Affiliation(s)
- Pranav Gupta: Georgia Institute of Technology, Atlanta, GA, 30308, USA
- Haoran Wen: StethX Microsystems, Atlanta, GA, 30308, USA
- Lorenzo Di Francesco: Department of Medicine, Division of General Internal Medicine, Emory University, Atlanta, GA, 30303, USA
- Farrokh Ayazi: Ken Byers Professor in Microsystems, Georgia Institute of Technology, Atlanta, GA, 30308, USA
54
Jung SY, Liao CH, Wu YS, Yuan SM, Sun CT. Efficiently Classifying Lung Sounds through Depthwise Separable CNN Models with Fused STFT and MFCC Features. Diagnostics (Basel) 2021; 11:732. [PMID: 33924146] [PMCID: PMC8074359] [DOI: 10.3390/diagnostics11040732]
Abstract
Lung sounds remain vital in clinical diagnosis as they reveal associations with pulmonary pathologies. With COVID-19 spreading across the world, it has become more pressing for medical professionals to better leverage artificial intelligence for faster and more accurate lung auscultation. This research proposes a feature engineering process that extracts dedicated features for a depthwise separable convolution neural network (DS-CNN) to classify lung sounds accurately and efficiently. We extracted a total of three features for the shrunk DS-CNN model: the short-time Fourier-transformed (STFT) feature, the Mel-frequency cepstrum coefficient (MFCC) feature, and the fusion of these two. We observed that while DS-CNN models trained on either the STFT or the MFCC feature achieved accuracies of 82.27% and 73.02%, respectively, fusing both features led to a higher accuracy of 85.74%. In addition, our method achieved 16 times higher inference speed on an edge device with only 0.45% less accuracy than RespireNet. These findings indicate that fusing STFT and MFCC features in a DS-CNN is a suitable model design for lightweight edge devices performing accurate AI-aided detection of lung diseases.
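For orientation, the two feature types fused in this study can be sketched with NumPy/SciPy. This is a simplified illustration; the sample rate, window sizes, mel filterbank, and the exact fusion strategy are assumptions, not the paper's implementation (which may, for example, resize the maps into separate input channels).

```python
import numpy as np
from scipy.signal import stft
from scipy.fft import dct

def stft_feature(x, fs=4000, nperseg=256):
    """Log-magnitude STFT, shape (freq_bins, frames)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return np.log(np.abs(Z) + 1e-8)

def mel_filterbank(n_mels, n_bins, fs):
    """Triangular filters spaced evenly on the mel scale (simplified)."""
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges_hz = mel2hz(np.linspace(0.0, hz2mel(fs / 2.0), n_mels + 2))
    edges = np.clip((edges_hz / (fs / 2.0) * (n_bins - 1)).astype(int), 0, n_bins - 1)
    fb = np.zeros((n_mels, n_bins))
    for i in range(n_mels):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        if mid > lo:
            fb[i, lo:mid] = np.linspace(0, 1, mid - lo, endpoint=False)
        if hi > mid:
            fb[i, mid:hi] = np.linspace(1, 0, hi - mid, endpoint=False)
    return fb

def mfcc_feature(x, fs=4000, nperseg=256, n_mels=26, n_coeff=13):
    """MFCCs: power spectrogram -> log-mel energies -> DCT-II."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    fb = mel_filterbank(n_mels, power.shape[0], fs)
    logmel = np.log(fb @ power + 1e-8)
    return dct(logmel, axis=0, norm="ortho")[:n_coeff]

def fused_features(x, fs=4000, nperseg=256):
    """Both maps share the same STFT frames, so they can be stacked
    along the feature axis; the paper's actual fusion may differ."""
    return np.vstack([stft_feature(x, fs, nperseg), mfcc_feature(x, fs, nperseg)])
```

The fused map keeps the STFT's fine spectral detail alongside the MFCC's compact perceptual summary, which is the complementarity the accuracy gain above suggests.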
Affiliation(s)
- Shing-Yun Jung, Chia-Hung Liao, Yu-Sheng Wu: Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Shyan-Ming Yuan, Chuen-Tsai Sun: Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan; Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300, Taiwan
55
Pal R, Barney A. Iterative envelope mean fractal dimension filter for the separation of crackles from normal breath sounds. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102454]
56
Automatic acoustic identification of respiratory diseases. Evolving Systems 2021. [DOI: 10.1007/s12530-020-09339-0]
57
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, García Galán S, Carabias Orti JJ, Peréz Chica G. Monophonic and Polyphonic Wheezing Classification Based on Constrained Low-Rank Non-Negative Matrix Factorization. Sensors (Basel) 2021; 21:1661. [PMID: 33670892] [PMCID: PMC7957792] [DOI: 10.3390/s21051661]
Abstract
The appearance of wheezing sounds is widely considered by physicians as a key indicator for detecting early pulmonary disorders and for assessing the severity of respiratory diseases such as asthma and chronic obstructive pulmonary disease. From a physician's point of view, monophonic and polyphonic wheezing classification is still a challenging topic in biomedical signal processing since both types of wheezes are sinusoidal in nature. Unlike most classification algorithms, in which interference caused by normal respiratory sounds is not addressed in depth, our first contribution proposes a novel Constrained Low-Rank Non-negative Matrix Factorization (CL-RNMF) approach which, to the best of the authors' knowledge, has never been applied to wheezing classification. It incorporates several constraints (sparseness and smoothness) and a low-rank configuration to extract the wheezing spectral content while minimizing acoustic interference from normal respiratory sounds. The second contribution automatically analyzes the harmonic structure of the energy distribution associated with the estimated wheezing spectrogram to classify the type of wheezing. Experimental results show that: (i) the proposed method outperforms the most recent and relevant state-of-the-art wheezing classification method by approximately 8% in accuracy; (ii) unlike state-of-the-art methods based on classifiers, the proposed method uses an unsupervised approach that does not require any training.
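The core idea of factorizing a spectrogram under non-negativity, a low rank, and a sparseness penalty can be sketched as follows. This is a generic sparse NMF with multiplicative updates, not the authors' CL-RNMF (which adds smoothness constraints and a specific low-rank configuration); the rank, penalty weight, and iteration count are assumptions.

```python
import numpy as np

def sparse_nmf(V, rank=2, sparsity=0.01, n_iter=300, seed=0):
    """Approximate a non-negative matrix V as W @ H with W, H >= 0,
    penalizing the L1 norm of the activations H:
        minimize ||V - W H||_F^2 + sparsity * sum(H).
    Multiplicative updates keep both factors non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3   # spectral bases (columns)
    H = rng.random((rank, m)) + 1e-3   # time activations (rows)
    for _ in range(n_iter):
        # L1 penalty on H appears as an extra constant in the denominator.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-9)
    return W, H
```

In a wheezing-analysis setting, V would be a magnitude spectrogram; the narrowband, quasi-sinusoidal wheeze content concentrates in a few low-rank bases in W, while broadband respiratory noise does not fit them well, which is what makes this family of methods attractive for interference suppression.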
Affiliation(s)
- Juan De La Torre Cruz, Francisco Jesús Cañadas Quesada, Nicolás Ruiz Reyes, Sebastián García Galán, Julio José Carabias Orti: Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Gerardo Peréz Chica: Pneumology Clinical Management Unit of the University Hospital of Jaen, Av. del Ejercito Espanol, 10, 23007 Jaen, Spain
58
Ulukaya S, Serbes G, Kahya YP. Resonance based separation and energy based classification of lung sounds using tunable wavelet transform. Comput Biol Med 2021; 131:104288. [PMID: 33676336] [DOI: 10.1016/j.compbiomed.2021.104288]
Abstract
BACKGROUND AND OBJECTIVE: The locations and occurrence pattern of adventitious sounds in the respiratory cycle carry critical diagnostic information. In a lung sound sample, crackles and wheezes may exist individually, or they may coexist in a successive/overlapping manner superimposed onto the breath noise. The performance of linear time-frequency-representation-based signal decomposition methods has been limited in the crackle/wheeze separation problem due to the common signal components that may arise in both the time and frequency domains. However, the proposed resonance-based decomposition can isolate crackles and wheezes, which behave oppositely in the time domain, even if they share common frequency bands.
METHODS: Crackle- and/or wheeze-containing synthetic and recorded lung-sound signals were decomposed using the resonance information produced by the joint application of the Tunable Q-factor Wavelet Transform and Morphological Component Analysis. The crackle localization and signal reconstruction performance of the proposed approach was compared with the previously suggested Independent Component Analysis and Empirical Mode Decomposition methods in a quantitative and qualitative manner. Additionally, the decomposition ability of the proposed approach was used to discriminate crackle and wheeze waveforms in an unsupervised way by employing signal energy.
RESULTS: The results show that the proposed approach significantly outperforms its competitors in terms of crackle localization and signal reconstruction. Moreover, the calculated energy values reveal that transient crackles and rhythmic wheezes can be successfully decomposed into low- and high-resonance channels while preserving the discriminative information.
CONCLUSIONS: Previous works suffer from deforming the waveforms of crackles, whose time-domain parameters are vital in computerized diagnostic classification systems. A method should therefore provide automatic and simultaneous decomposition, with smaller root mean square error and higher accuracy, as demonstrated by the proposed approach.
Affiliation(s)
- Sezer Ulukaya: Department of Electrical and Electronics Engineering, Boğaziçi University, 34342, Istanbul, Turkey; Department of Electrical and Electronics Engineering, Trakya University, 22030, Edirne, Turkey
- Gorkem Serbes: Department of Biomedical Engineering, Yildiz Technical University, 34220, Istanbul, Turkey
- Yasemin P Kahya: Department of Electrical and Electronics Engineering, Boğaziçi University, 34342, Istanbul, Turkey
59
Horimasu Y, Ohshimo S, Yamaguchi K, Sakamoto S, Masuda T, Nakashima T, Miyamoto S, Iwamoto H, Fujitaka K, Hamada H, Sadamori T, Shime N, Hattori N. A machine-learning based approach to quantify fine crackles in the diagnosis of interstitial pneumonia: A proof-of-concept study. Medicine (Baltimore) 2021; 100:e24738. [PMID: 33607819] [PMCID: PMC7899847] [DOI: 10.1097/md.0000000000024738]
Abstract
Fine crackles are frequently heard in patients with interstitial lung diseases (ILDs) and are a sensitive indicator of ILDs, although no objective method for analyzing respiratory sounds, including fine crackles, is clinically available. We previously developed a machine-learning-based algorithm that can promptly analyze and quantify respiratory sounds, including fine crackles. In the present proof-of-concept study, we assessed the usefulness of fine crackles quantified by this algorithm in the diagnosis of ILDs. We evaluated the fine crackles quantitative values (FCQVs) in 60 participants who underwent high-resolution computed tomography (HRCT) and chest X-ray in our hospital, evaluating the right and left lung fields separately. In 67 lung fields with ILDs in HRCT, the mean FCQVs (0.121 ± 0.090) were significantly higher than those in the lung fields without ILDs (0.032 ± 0.023, P < .001). Among those with ILDs in HRCT, the mean FCQVs were significantly higher in those with idiopathic pulmonary fibrosis than in those with other types of ILDs (P = .002). In addition, an increased mean FCQV was associated with the presence of traction bronchiectasis (P = .003) and honeycombing (P = .004) in HRCT. Furthermore, in discriminating ILDs in HRCT, an FCQV-based determination of the presence or absence of fine crackles showed higher sensitivity than a chest X-ray-based determination of the presence or absence of ILDs. We herein report that machine-learning-based quantification of fine crackles can predict the HRCT findings of lung fibrosis and can support the prompt and sensitive diagnosis of ILDs.
Affiliation(s)
- Hironobu Hamada: Physical Analysis and Therapeutic Sciences, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima, Japan
60
Multichannel lung sound analysis to detect severity of lung disease in cystic fibrosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102266]
61
Tabatabaei SAH, Fischer P, Schneider H, Koehler U, Gross V, Sohrabi K. Methods for Adventitious Respiratory Sound Analyzing Applications Based on Smartphones: A Survey. IEEE Rev Biomed Eng 2021; 14:98-115. [PMID: 32746364] [DOI: 10.1109/rbme.2020.3002970]
Abstract
Detection and classification of adventitious acoustic lung sounds plays an important role in diagnosing, monitoring, controlling, and caring for patients with lung diseases. Such systems can be deployed on different platforms, such as medical devices, standalone software, or smartphone applications. The ubiquity of smartphones and the widespread use of their applications make them an attractive platform for hosting detection and classification systems for adventitious lung sounds. In this paper, smartphone-based systems for the automatic detection and classification of adventitious lung sounds, including cough, wheeze, crackle, and snore, are surveyed. Relevant sounds related to abnormal respiratory activities are considered as well. The methods are briefly described and their analysis algorithms explained; the analysis includes detection and/or classification of the sound events. A summary of the main surveyed methods, together with their classification parameters and the features used, is given for comparison. Existing challenges, open issues, and future trends are discussed as well.
62
A novel system that continuously visualizes and analyzes respiratory sounds to promptly evaluate upper airway abnormalities: a pilot study. J Clin Monit Comput 2021; 36:221-226. [PMID: 33459947] [DOI: 10.1007/s10877-020-00641-5]
Abstract
Although respiratory sounds are useful indicators for evaluating abnormalities of the upper airway and lungs, the accuracy of their evaluation may be limited, and the continuous evaluation and visualization of respiratory sounds has so far been impossible. To resolve these problems, we developed a novel system that continuously visualizes respiratory sounds. The system was used to evaluate respiratory abnormalities in two patients; its results were not reviewed until later. The first patient was a 23-year-old man with chronic granulomatous disease and persistent anorexia. During his hospital stay, he exhibited a consciousness disorder, bradypnea, and hypercapnia requiring tracheal intubation. After the administration of a muscle relaxant, he suddenly developed acute airway stenosis. Because we could neither intubate nor ventilate, we performed a cricothyroidotomy. Subsequent review of the system's recordings revealed mild stridor before the onset of acute airway stenosis, which had not been recognized clinically. The second patient was a 74-year-old woman who had been intubated several days earlier for a tracheal burn injury and was extubated after alleviation of her laryngeal edema. After extubation, she gradually developed inspiratory stridor, and we re-intubated her after diagnosing post-extubation laryngeal edema. Subsequent review of the system's recordings revealed serially increasing stridor after the extubation, beginning earlier than was recognized by healthcare providers. This continuous monitoring and visualization system for respiratory sounds could be an objective tool for improving patient safety with respect to airway complications.
63
Feng Y, Wang Y, Zeng C, Mao H. Artificial Intelligence and Machine Learning in Chronic Airway Diseases: Focus on Asthma and Chronic Obstructive Pulmonary Disease. Int J Med Sci 2021; 18:2871-2889. [PMID: 34220314] [PMCID: PMC8241767] [DOI: 10.7150/ijms.58191]
Abstract
Chronic airway diseases are characterized by airway inflammation, obstruction, and remodeling and show high prevalence, especially in developing countries. Among them, asthma and chronic obstructive pulmonary disease (COPD) show the highest morbidity and socioeconomic burden worldwide. Although there are extensive guidelines for the prevention, early diagnosis, and rational treatment of these lifelong diseases, their value in precision medicine is very limited. Artificial intelligence (AI) and machine learning (ML) techniques have emerged as effective methods for mining and integrating large-scale, heterogeneous medical data for clinical practice, and several AI and ML methods have recently been applied to asthma and COPD. However, very few methods have significantly contributed to clinical practice. Here, we review four aspects of AI and ML implementation in asthma and COPD to summarize existing knowledge and indicate future steps required for the safe and effective application of AI and ML tools by clinicians.
Affiliation(s)
- Yinhe Feng: Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China; Department of Respiratory and Critical Care Medicine, People's Hospital of Deyang City, Affiliated Hospital of Chengdu College of Medicine, Deyang, Sichuan Province, China
- Yubin Wang: Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China
- Chunfang Zeng: Department of Respiratory and Critical Care Medicine, People's Hospital of Deyang City, Affiliated Hospital of Chengdu College of Medicine, Deyang, Sichuan Province, China
- Hui Mao: Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan Province, China
64
Das N, Topalovic M, Janssens W. AIM in Respiratory Disorders. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_178-1]
65
Automatic Classification of Adventitious Respiratory Sounds: A (Un)Solved Problem? Sensors (Basel) 2020; 21:57. [PMID: 33374363] [PMCID: PMC7795327] [DOI: 10.3390/s21010057]
Abstract
(1) Background: Patients with respiratory conditions typically exhibit adventitious respiratory sounds (ARS), such as wheezes and crackles, and ARS events have variable duration. In this work, we studied the influence of event duration on automatic ARS classification, namely, how the creation of the Other (negative) class affected the classifiers' performance. (2) Methods: We conducted a set of experiments where we varied the durations of the other events on three tasks: crackle vs. wheeze vs. other (3 Class); crackle vs. other (2 Class Crackles); and wheeze vs. other (2 Class Wheezes). Four classifiers (linear discriminant analysis, support vector machines, boosted trees, and convolutional neural networks) were evaluated on those tasks using an open access respiratory sound database. (3) Results: While on the 3 Class task with fixed durations, the best classifier achieved an accuracy of 96.9%, the same classifier reached an accuracy of 81.8% on the more realistic 3 Class task with variable durations. (4) Conclusion: These results demonstrate the importance of experimental design in the assessment of automatic ARS classification algorithms. Furthermore, they also indicate, contrary to what is stated in the literature, that the automatic classification of ARS is not a solved problem, as the algorithms' performance decreases substantially under complex evaluation scenarios.
66
Multi-Time-Scale Features for Accurate Respiratory Sound Classification. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10238606]
Abstract
The COVID-19 pandemic has amplified the urgency of developments in computer-assisted medicine and, in particular, the need for automated tools supporting the clinical diagnosis and assessment of respiratory symptoms. This need was already clear to the scientific community, which launched an international challenge in 2017 at the International Conference on Biomedical Health Informatics (ICBHI) for the implementation of accurate algorithms for the classification of respiratory sounds. In this work, we present a framework for respiratory sound classification based on two different kinds of features: (i) short-term features, which summarize sound properties on a time scale of tenths of a second, and (ii) long-term features, which assess sound properties on a time scale of seconds. Using the publicly available dataset provided by ICBHI, we cross-validated the classification performance of a neural network model over 6895 respiratory cycles and 126 subjects. The proposed model reached an accuracy of 85% ± 3% and a precision of 80% ± 8%, which compare well with the body of literature. The robustness of the predictions was assessed by comparison with state-of-the-art machine learning tools, such as the support vector machine, random forest, and deep neural networks. The model presented here is therefore suitable for large-scale applications and for adoption in clinical practice. Finally, an interesting observation is that both short-term and long-term features are necessary for accurate classification, which could be the subject of future studies related to its clinical interpretation.
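The two feature time scales described in the abstract can be illustrated with a minimal NumPy sketch (the RMS feature, frame length, and synthetic signal are illustrative assumptions, not the paper's exact feature set): short-term features summarize ~0.1 s frames, and long-term features aggregate that track over seconds.

```python
import numpy as np

def short_term_features(x, fs, frame_s=0.1):
    """Per-frame RMS on tenth-of-a-second frames (the short-term scale)."""
    frame = int(fs * frame_s)
    n_frames = len(x) // frame
    frames = x[:n_frames * frame].reshape(n_frames, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def long_term_features(short_feats):
    """Statistics of the short-term track over seconds (the long-term scale)."""
    return np.array([short_feats.mean(), short_feats.std(),
                     short_feats.max(), short_feats.min()])

fs = 4000
t = np.arange(0, 2.0, 1 / fs)                         # a 2 s synthetic cycle
x = np.sin(2 * np.pi * 150 * t) * np.hanning(t.size)  # amplitude-modulated tone
st = short_term_features(x, fs)   # 20 short-term values (one per 0.1 s frame)
lt = long_term_features(st)       # 4 long-term summary statistics
```

Both scales would then enter the classifier together, matching the authors' observation that neither set alone suffices.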
67
Leelavathy S, Nithya M. Public opinion mining using natural language processing technique for improvisation towards smart city. International Journal of Speech Technology 2020; 24:561-569. [PMID: 33199973 PMCID: PMC7656096 DOI: 10.1007/s10772-020-09766-z]
Abstract
In a digital world integrating smart city concepts, there is tremendous scope and need for e-governance applications. People now analyze the opinions of others before purchasing a product, booking a hotel, or stepping into a restaurant, and users share their experience as feedback on the service. But there is no e-governance platform for collecting public opinion and grievances about COVID-19, new government laws, policies, and so on. With the growing availability of opinion-rich information, new opportunities and challenges arise in developing technology for mining the huge set of public messages and opinions, alerting the relevant departments to take necessary action, and also alerting nearby ambulances when a message is related to COVID-19. To address this pandemic situation, an efficient natural language processing based e-governance platform is needed to detect corona-positive patients, provide transparency on COVID-19 counts, and alert the health ministry and nearby ambulances based on user voice inputs. To convert public voice messages into text, we used Hidden Markov Models (HMMs). To identify the government department responsible for each user voice input, we perform pre-processing, part-of-speech tagging, unigram, bigram, and trigram analysis, and fuzzy logic (a machine learning technique). After identifying the responsible department, we apply two methods: (1) automatic alert e-mails and messages to the government department officials and to a nearby ambulance or COVID-19 camp if the input is related to COVID-19; and (2) a ticketing system for monitoring by the public and government officials. For the experimental results, we used a Java-based web and mobile application to execute the proposed methodology. The integration of HMMs and fuzzy logic provides promising results.
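The unigram/bigram/trigram analysis mentioned in the abstract reduces to sliding windows over the token stream; a minimal sketch (the example complaint text is hypothetical):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token windows of the token list."""
    return list(zip(*(tokens[i:] for i in range(n))))

# hypothetical output of the HMM speech-to-text stage
tokens = "water supply failure near the city hospital".split()

unigrams = Counter(ngrams(tokens, 1))
bigrams = Counter(ngrams(tokens, 2))
trigrams = Counter(ngrams(tokens, 3))
```

Frequent n-grams (e.g. ('city', 'hospital')) are the kind of cue a downstream fuzzy-logic stage can match against department keyword lists.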
Affiliation(s)
- S. Leelavathy
- Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Missions Research Foundation (Deemed to be University), Paiyanoor, India
- M. Nithya
- Department of Computer Science and Engineering, Vinayaka Mission’s Kirupananda Variyar Engineering College, Vinayaka Missions Research Foundation (Deemed to be University), Salem, India
68
Renjini A, Raj V, Swapna MS, Sreejyothi S, Sankararaman S. Phase portrait for high fidelity feature extraction and classification: A surrogate approach. Chaos 2020; 30:113122. [PMID: 33261330 DOI: 10.1063/5.0020121]
Abstract
This paper proposes a novel surrogate method for classifying breath sound signals for auscultation through principal component analysis (PCA), extracting the features of a phase portrait. The nonlinear parameters of the phase portrait, such as the Lyapunov exponent, the sample entropy, the fractal dimension, and the Hurst exponent, help in understanding the degree of complexity arising from the turbulence of air molecules in the airways of the lungs. Thirty-nine breath sound signals of bronchial breath (BB) and pleural rub (PR) are studied through spectral, fractal, and phase portrait analyses. The fast Fourier transform and wavelet analyses show fewer high-intensity, low-frequency components in PR, unlike BB. The fractal dimension and sample entropy values for PR are 1.772 and 1.041, respectively, while those for BB are 1.801 and 1.331. This study reveals that the BB signal is more complex and random, as evidenced by the fractal dimension and sample entropy values. The signals are classified by PCA based on features extracted from the power spectral density (PSD) data and on the features of the phase portrait. The PCA based on the phase portrait features considers the temporal correlation of the signal amplitudes, whereas that based on the PSD data considers only the signal amplitudes, suggesting that the former method is better, as it reflects the multidimensional aspects of the signal. In the PCA-based classification, this appears as a variance of 89.6% for BB, higher than the 80.5% for the PR signal, suggesting the higher fidelity of the phase portrait-based classification.
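Of the nonlinear measures listed in this abstract, sample entropy is the easiest to sketch. A minimal NumPy implementation (using the common m = 2, r = 0.2·SD convention, an assumption here) shows the sense in which a more regular signal scores lower:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m
    within Chebyshev tolerance r, and A does the same for length m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def match_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return (np.count_nonzero(d <= r) - len(templates)) / 2  # drop self-matches
    return -np.log(match_pairs(m + 1) / match_pairs(m))

t = np.linspace(0, 20 * np.pi, 400)
regular = np.sin(t)                                    # periodic, low complexity
noisy = np.random.default_rng(0).standard_normal(400)  # irregular, high complexity
```

On this measure, a breath sound driven by more turbulent airflow behaves like the noisy signal, consistent with the higher sample entropy the paper reports for BB.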
Affiliation(s)
- A Renjini, Vimal Raj, M S Swapna, S Sreejyothi, S Sankararaman
- Department of Optoelectronics, University of Kerala, Trivandrum 695581, Kerala, India
69
Raj V, Renjini A, Swapna MS, Sreejyothi S, Sankararaman S. Nonlinear time series and principal component analyses: Potential diagnostic tools for COVID-19 auscultation. Chaos, Solitons and Fractals 2020; 140:110246. [PMID: 32863618 PMCID: PMC7444955 DOI: 10.1016/j.chaos.2020.110246]
Abstract
The development of novel digital auscultation techniques has become highly significant in the context of the outbreak of the COVID-19 pandemic. The present work reports the spectral, nonlinear time series, fractal, and complexity analysis of vesicular (VB) and bronchial (BB) breath signals. The analysis is carried out on 37 breath sound signals. The spectral analysis brings out the signatures of VB and BB through the power spectral density plot and wavelet scalogram. The dynamics of airflow through the respiratory tract during VB and BB are investigated using nonlinear time series and complexity analyses in terms of the phase portrait, fractal dimension, Hurst exponent, and sample entropy. The higher degree of chaoticity in BB relative to VB is revealed through the maximal Lyapunov exponent. Principal component analysis helps in classifying VB and BB sound signals through features extracted from the power spectral density data. The method proposed in the present work is simple, cost-effective, and sensitive, with far-reaching potential for addressing and diagnosing the current issue of COVID-19 through lung auscultation.
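The PCA-on-PSD step can be sketched with synthetic signals standing in for the two breath types (the 80 Hz and 300 Hz tones and the noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 2048, 2048
t = np.arange(n) / fs

# two synthetic classes with different dominant frequencies
class_a = [np.sin(2 * np.pi * 80 * t) + 0.3 * rng.standard_normal(n) for _ in range(10)]
class_b = [np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(n) for _ in range(10)]
signals = np.array(class_a + class_b)

# power spectral density features via the real FFT
psd = np.abs(np.fft.rfft(signals, axis=1)) ** 2

# PCA through SVD of the mean-centred feature matrix
X = psd - psd.mean(axis=0)
U, s, _ = np.linalg.svd(X, full_matrices=False)
scores = U * s                       # component scores per signal
explained = s ** 2 / (s ** 2).sum()  # variance explained per component
```

Here the first component alone separates the two classes, mirroring how the paper separates VB from BB in PCA space.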
Affiliation(s)
- Vimal Raj, A Renjini, M S Swapna, S Sreejyothi, S Sankararaman
- Department of Optoelectronics, University of Kerala, Trivandrum 695581, Kerala, India
70
Chen S, Huang M, Peng X, Yuan Y, Huang S, Ye Y, Zhao W, Li B, Han H, Yang S, Cai S, Zhao H. [Lung sounds can be used as an indicator for assessing severity of chronic obstructive pulmonary disease at the initial diagnosis]. Nan Fang Yi Ke Da Xue Xue Bao (J South Med Univ) 2020; 40:177-182. [PMID: 32376545 DOI: 10.12122/j.issn.1673-4254.2020.02.07]
Abstract
OBJECTIVE To assess the value of pulmonary auscultation for evaluating the severity of chronic obstructive pulmonary disease (COPD) at the initial diagnosis. METHODS Patients newly diagnosed with COPD in our hospital between May 2016 and May 2019 were enrolled in this study. According to the findings of pulmonary auscultation, the lung sounds were classified into 5 groups: normal breathing sounds, weakened breathing sounds, weakened breathing sounds with wheezing, obviously weakened breathing sounds, and obviously weakened breathing sounds with wheezing. The pulmonary function of the patients was graded according to the GOLD guidelines, and the differential diagnosis between COPD and asthma-COPD overlap (ACO) was made based on the GOLD guidelines and the European Respiratory criteria. RESULTS A total of 1046 newly diagnosed COPD patients were enrolled, including 949 male and 97 female patients, with a mean age of 62.6 ± 8.71 years. According to the GOLD criteria, 88.1% of the patients were identified as having moderate or worse COPD, and 50.0% as having severe or worse COPD; a further diagnosis of ACO was made in 347 (33.2%) of the patients. ANOVA showed significant differences in disease course, FEV1, FEV1%, FEV1/FVC, FVC, FVC%, and mMRC among the 5 auscultation groups (P < 0.001), but FENO did not differ significantly among them (P = 0.097). The percentage of patients with wheezing on auscultation was significantly greater in the ACO group than in the COPD group (P < 0.001). Spearman correlation analysis showed that lung sounds were significantly correlated with disease severity, FEV1, FEV1%, FVC, and FVC% (P < 0.001). Multiple linear regression analysis showed that a longer disease course, a history of smoking, and lung sounds were all associated with poorer lung function and greater disease severity. CONCLUSIONS Lung sounds can be used as an indicator for assessing the severity of COPD at the initial diagnosis.
Affiliation(s)
- Shifeng Chen, Minyu Huang, Xianru Peng, Yafei Yuan, Shuyu Huang, Yanmei Ye, Wenqu Zhao, Bohou Li, Huishan Han, Shuluan Yang, Shaoxi Cai, Haijin Zhao
- Laboratory of Chronic Airway Diseases, Department of Respiratory and Critical Care Medicine, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
71
Acharya J, Basu A. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning. IEEE Trans Biomed Circuits Syst 2020; 14:535-544. [PMID: 32191898 DOI: 10.1109/tbcas.2020.2981172]
Abstract
The primary objective of this paper is to build classification models and strategies to identify breathing sound anomalies (wheeze, crackle) for automated diagnosis of respiratory and pulmonary diseases. In this work we propose a deep CNN-RNN model that classifies respiratory sounds based on Mel-spectrograms. We also implement a patient-specific model tuning strategy that first screens respiratory patients and then builds patient-specific classification models using limited patient data for reliable anomaly detection. Moreover, we devise a local log quantization strategy for model weights to reduce the memory footprint for deployment in memory-constrained systems such as wearable devices. The proposed hybrid CNN-RNN model achieves a score of [Formula: see text] on four-class classification of breathing cycles for the ICBHI'17 scientific challenge respiratory sound database. When the model is re-trained with patient-specific data, it produces a score of [Formula: see text] for leave-one-out validation. The proposed weight quantization technique achieves an approximately 4× reduction in total memory cost without loss of performance. The main contributions of the paper are as follows. First, the proposed model achieves a state-of-the-art score on the ICBHI'17 dataset. Second, deep learning models are shown to successfully learn domain-specific knowledge when pre-trained with breathing data, and produce significantly superior performance compared to generalized models. Finally, local log quantization of trained weights is shown to reduce the memory requirement significantly. This type of patient-specific re-training strategy can be very useful in developing reliable long-term automated patient monitoring systems, particularly in wearable healthcare solutions.
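The log-quantization idea, storing each weight as a sign plus a rounded base-2 exponent, can be sketched as follows (a generic power-of-two quantizer, not necessarily the paper's exact local scheme):

```python
import numpy as np

def log_quantize(w):
    """Replace each weight by sign(w) * 2**round(log2|w|)."""
    sign = np.sign(w).astype(np.int8)
    exp = np.round(np.log2(np.abs(w) + 1e-20)).astype(np.int8)
    return sign, exp

def dequantize(sign, exp):
    return sign * np.exp2(exp.astype(np.float32))

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)
sign, exp = log_quantize(w)
w_hat = dequantize(sign, exp)

# float32 (4 bytes) -> int8 exponent (1 byte), the ~4x memory saving;
# in practice the sign can be packed into a spare bit of the stored code
ratio = w.nbytes // exp.nbytes
```

Power-of-two rounding bounds the multiplicative error of each weight by a factor of √2, which is why such schemes can preserve accuracy.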
72
Multi-channel lung sound classification with convolutional recurrent neural networks. Comput Biol Med 2020; 122:103831. [PMID: 32658732 DOI: 10.1016/j.compbiomed.2020.103831]
Abstract
In this paper, we present an approach for multi-channel lung sound classification, exploiting spectral, temporal and spatial information. In particular, we propose a frame-wise classification framework to process full breathing cycles of multi-channel lung sound recordings with a convolutional recurrent neural network. With our recently developed 16-channel lung sound recording device, we collect lung sound recordings from lung-healthy subjects and patients with idiopathic pulmonary fibrosis (IPF), within a clinical trial. From the lung sound recordings, we extract spectrogram features and compare different deep neural network architectures for binary classification, i.e. healthy vs. pathological. Our proposed classification framework with the convolutional recurrent neural network outperforms the other networks by achieving an F-score of F1≈92%. Together with our multi-channel lung sound recording device, we present a holistic approach to multi-channel lung sound analysis.
73
Gonem S, Janssens W, Das N, Topalovic M. Applications of artificial intelligence and machine learning in respiratory medicine. Thorax 2020; 75:695-701. [PMID: 32409611 DOI: 10.1136/thoraxjnl-2020-214556]
Abstract
The past 5 years have seen an explosion of interest in the use of artificial intelligence (AI) and machine learning techniques in medicine. This has been driven by the development of deep neural networks (DNNs)-complex networks residing in silico but loosely modelled on the human brain-that can process complex input data such as a chest radiograph image and output a classification such as 'normal' or 'abnormal'. DNNs are 'trained' using large banks of images or other input data that have been assigned the correct labels. DNNs have shown the potential to equal or even surpass the accuracy of human experts in pattern recognition tasks such as interpreting medical images or biosignals. Within respiratory medicine, the main applications of AI and machine learning thus far have been the interpretation of thoracic imaging, lung pathology slides and physiological data such as pulmonary function tests. This article surveys progress in this area over the past 5 years, as well as highlighting the current limitations of AI and machine learning and the potential for future developments.
Affiliation(s)
- Sherif Gonem
- Department of Respiratory Medicine, Nottingham University Hospitals NHS Trust, Nottingham, UK; Division of Respiratory Medicine, University of Nottingham, Nottingham, UK
- Wim Janssens
- Department of Chronic Diseases, Metabolism and Ageing, KU Leuven, Leuven, Belgium; Department of Respiratory Diseases, University Hospitals Leuven, Leuven, Belgium
- Nilakash Das
- Department of Chronic Diseases, Metabolism and Ageing, KU Leuven, Leuven, Belgium
- Marko Topalovic
- Department of Chronic Diseases, Metabolism and Ageing, KU Leuven, Leuven, Belgium; ArtiQ NV, Leuven, Belgium
74
Hsu F, How CH, Huang SR, Chen YT, Chen JS, Hsin HT. Locating stridor caused by tumor compression by using a multichannel electronic stethoscope: a case report. J Clin Monit Comput 2020; 35:663-670. [PMID: 32388652 PMCID: PMC7224060 DOI: 10.1007/s10877-020-00517-8]
Abstract
A 67-year-old male patient with chronic obstructive pulmonary disease was admitted to a hospital in northern Taiwan for progressive dyspnea and productive cough with an enlarged left upper lobe tumor (5.3 × 6.8 × 3.9 cm3). Previous chest auscultation on outpatient visits had yielded diffuse wheezes. A localized stridor (fundamental frequency of 125 Hz) was captured using a multichannel electronic stethoscope comprising four microelectromechanical system microphones. An energy-based localization algorithm was used to successfully locate the sound source of the stridor caused by tumor compression. The results of the algorithm were compatible with the findings obtained from computed tomography and bronchoscopy (mean radius = 9.40 mm and radial standard deviation = 14.97 mm). We demonstrated a potential diagnostic aid for pulmonary diseases through sound-source localization technology based on respiratory monitoring. The proposed technique can facilitate detection when advanced imaging tools are not immediately available. Continuing effort on the development of more precise estimation is warranted.
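As a toy illustration of energy-based localization (the four microphone positions and the 1/r² free-field model are assumptions here; the paper's algorithm is more elaborate), a source estimate can be formed as the energy-weighted centroid of the sensor positions:

```python
import numpy as np

# hypothetical 2-D sensor layout on the chest wall, in cm
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])  # ground-truth position for the toy example

# ideal free-field model: received energy decays as 1/r^2
r = np.linalg.norm(mics - source, axis=1)
energy = 1.0 / r ** 2

# energy-weighted centroid of microphone positions as the estimate
estimate = energy @ mics / energy.sum()
error = np.linalg.norm(estimate - source)
```

In this idealized setting the centroid lands well within a centimetre of the source; real chest recordings add attenuation and noise that a practical algorithm must model.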
Affiliation(s)
- Fushun Hsu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Cheng-Hung How
- Division of Thoracic Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Shang-Ran Huang
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Yi-Tsun Chen
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Jin-Shing Chen
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Ho-Tsung Hsin
- Division of Cardiovascular Medicine, Far Eastern Memorial Hospital, No. 21, Sec. 2, Nanya South Road, 22060, Banqiao, New Taipei, Taiwan
75
Muthusamy PD, Sundaraj K, Abd Manap N. Computerized acoustical techniques for respiratory flow-sound analysis: a systematic review. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09769-6]
76
Speranza CG, da Ponte DF, da Rocha CAF, Moraes R. Blind Equalization of Lung Crackle Sounds to Compensate Chest Attenuation. IEEE J Biomed Health Inform 2019; 24:1796-1804. [PMID: 31581103 DOI: 10.1109/jbhi.2019.2944995]
Abstract
Diseased lungs generate adventitious sounds that propagate through the thorax, reaching the surface where they may be heard or recorded. The attenuation imposed on lung sounds by the thorax depends on the physical characteristics of each patient, hampering the analysis of quantitative indexes measured to assist the diagnosis of cardiorespiratory disorders. This work proposes the application of a blind equalizer (eigenvector algorithm, EVA) to reduce the effects of thorax attenuation on indexes measured from crackle sounds. Computer-simulated crackles (acquired on the posterior chest wall after being applied to a volunteer's mouth) and actual crackles belonging to a database were equalized. Quantitative indexes were measured from crackles before and after equalization. Comparison of indexes measured from simulated crackles reveals that the equalizer improves the results owing to attenuation compensation and removal of Gaussian noise. The effects of equalization on indexes measured from actual crackles were qualitatively assessed. The results indicate that blind equalization of crackles recorded on the thorax provides more consistent quantitative indexes to assist the diagnosis of different cardiorespiratory diseases.
77
Chen H, Yuan X, Li J, Pei Z, Zheng X. Automatic Multi-Level In-Exhale Segmentation and Enhanced Generalized S-Transform for wheezing detection. Comput Methods Programs Biomed 2019; 178:163-173. [PMID: 31416545 DOI: 10.1016/j.cmpb.2019.06.024]
Abstract
BACKGROUND AND OBJECTIVE Wheezing is a common symptom in patients with asthma and chronic obstructive pulmonary disease. Wheezing detection identifies wheezing lung sounds and helps physicians in the diagnosis, monitoring, and treatment of pulmonary diseases. Unlike the traditional way of detecting wheezing sounds with digital image processing methods, automatic wheezing detection uses computerized tools or algorithms to objectively and accurately assess and evaluate lung sounds. We propose an innovative machine learning-based approach for wheezing detection. The phases of the respiratory sounds are separated automatically, and the wheezing features are extracted accordingly to improve classification accuracy. METHODS To enhance the wheezing features for classification, Adaptive Multi-Level In-Exhale Segmentation (AMIE_SEG) is proposed to automatically and precisely segment respiratory sounds into inspiratory and expiratory phases. Furthermore, the Enhanced Generalized S-Transform (EGST) is proposed to extract the wheezing features. The highlighted wheezing features improve the accuracy of wheezing detection with machine learning-based classifiers. RESULTS To evaluate the novelty and superiority of the proposed AMIE_SEG and EGST for wheezing detection, we employed three machine learning-based classifiers, Support Vector Machine (SVM), Extreme Learning Machine (ELM), and K-Nearest Neighbor (KNN), with public datasets at the segment level and record level, respectively. According to the experimental results, the proposed method performs best with the KNN classifier at the segment level, with average accuracy, sensitivity, and specificity of 98.62%, 95.9%, and 99.3%, respectively. At the record level, all three classifiers perform excellently, with accuracy, sensitivity, and specificity up to 99.52%, 100%, and 99.27%, respectively. We validated the method with a public respiratory sounds dataset.
CONCLUSION The comparison results indicate the very good performance of the proposed methods for long-term wheezing monitoring and telemedicine.
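As a rough stand-in for energy-based phase segmentation (the frame length, threshold, and synthetic signal are all assumptions; AMIE_SEG itself is multi-level and adaptive), breathing phases can be found as runs of high short-time RMS:

```python
import numpy as np

def segment_phases(x, fs, frame_s=0.05, thresh_ratio=0.3):
    """Mark frames whose short-time RMS exceeds a fraction of the maximum,
    then return runs of active frames as (start, end) sample indices."""
    frame = int(fs * frame_s)
    n = len(x) // frame
    rms = np.sqrt((x[:n * frame].reshape(n, frame) ** 2).mean(axis=1))
    active = rms > thresh_ratio * rms.max()
    phases, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            phases.append((start * frame, i * frame))
            start = None
    if start is not None:
        phases.append((start * frame, n * frame))
    return phases

# synthetic recording: two noise bursts standing in for two breath phases
rng = np.random.default_rng(0)
fs = 1000
x = 0.01 * rng.standard_normal(3 * fs)
x[500:1000] += rng.standard_normal(500)    # first "phase"
x[2000:2500] += rng.standard_normal(500)   # second "phase"
phases = segment_phases(x, fs)             # [(500, 1000), (2000, 2500)]
```

Each recovered phase would then be passed to the time-frequency transform and classifier separately, which is the role AMIE_SEG plays upstream of the EGST features.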
Affiliation(s)
- Hai Chen
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau; School of Information Technology, Beijing Normal University, Zhuhai, Zhuhai, China.
- Xiaochen Yuan
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau.
- Jianqing Li
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau.
- Zhiyuan Pei
- School of Information Technology, Beijing Normal University, Zhuhai, Zhuhai, China.
- Xiaobin Zheng
- Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China.
78
Ntalampiras SA, Ludovico LA, Presti G, Prato Previde EP, Battini M, Cannas S, Palestrini C, Mattiello S. Automatic Classification of Cat Vocalizations Emitted in Different Contexts. Animals (Basel) 2019; 9:543. [PMID: 31405018 PMCID: PMC6719916 DOI: 10.3390/ani9080543]
Abstract
Simple Summary: Vocalizations are cats' basic means of communication. They are particularly important in assessing cat welfare, since they carry information about the environment in which they were produced, the animal's emotional state, and so on. This work proposes a fully automatic framework able to process such vocalizations and reveal the context in which they were produced, using suitable audio signal processing and pattern recognition algorithms. We recorded vocalizations from the Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. The obtained results are excellent, rendering the proposed framework particularly useful towards a better understanding of the acoustic communication between humans and cats.
Abstract: Cats employ vocalizations to communicate information, so their sounds can carry a wide range of meanings. An aspect of increasing relevance, directly connected with the welfare of such animals, is the emotional interpretation of vocalizations and the recognition of their production context. To this end, this work presents a proof of concept facilitating the automatic analysis of cat vocalizations based on signal processing and pattern recognition techniques, aimed at demonstrating whether the emission context can be identified from meowing vocalizations, even when recorded in sub-optimal conditions. We rely on a dataset including vocalizations of the Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. To capture the emission context, we extract two sets of acoustic parameters, i.e., mel-frequency cepstral coefficients and temporal modulation features. These are then modeled using a classification scheme based on a directed acyclic graph dividing the problem space. The experiments we conducted demonstrate the superiority of such a scheme over a series of generative and discriminative classification solutions. These results open up new perspectives for deepening our knowledge of acoustic communication between humans and cats and, in general, between humans and animals.
Affiliation(s)
- Giorgio Presti
- Department of Computer Science, University of Milan, 20133 Milan, Italy.
- Monica Battini
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy.
- Simona Cannas
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy.
- Clara Palestrini
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy.
- Silvana Mattiello
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy.
79
Pereira CA, Soares MR, Boaventura R, Castro MD, Gomes PS, Gimenez A, Fukuda C, Cerezoli M, Missrie I. Squawks in interstitial lung disease: prevalence and causes in a cohort of one thousand patients. Medicine (Baltimore) 2019; 98:e16419. [PMID: 31335692 PMCID: PMC6709015 DOI: 10.1097/md.0000000000016419]
Abstract
Squawks are lung adventitious sounds with a mix of both musical and nonmusical components heard during the inspiratory phase. Small series have described squawks in interstitial lung diseases. Hypersensitivity pneumonitis and other diseases involving small airways can result in squawks, but new interstitial lung diseases (ILDs) involving peripheral airways are being described. A retrospective analysis was performed on 1000 consecutive patients from a database of ILD at a tertiary referral center. Squawks were recorded in 49 cases (4.9%): hypersensitivity pneumonitis (23 cases), connective tissue disease (7), microaspiration (4), pleuroparenchymal fibroelastosis (4), fibrosing cryptogenic organizing pneumonia (3), familial ILD (2), sarcoidosis (2), idiopathic pulmonary fibrosis (IPF; 1), bronchiolitis (2), and nonspecific interstitial pneumonia (1). One patient had a final diagnosis of IPF. There was a significant association between mosaic pattern and squawks: 20 cases with squawks (40.8%) had mosaic pattern compared with 140 (14.7%) cases without squawks (χ² = 23.6, P < .001). Findings indicative of fibrosis were described on high-resolution chest tomography (HRCT) in 715 cases (71.5%). Squawks were more common in patients with findings indicative of fibrosis on HRCT: 45 of 715 (6.3%) compared with 4 of 285 (1.4%) of those without findings indicative of fibrosis (χ² = 10.46, P = .001). In conclusion, squawks are an uncommon finding on physical examination in patients with ILD, but when present they suggest fibrosing ILD associated with bronchiolar involvement. However, squawks are rare in IPF.
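As a sanity check, the reported chi-square statistics can be reproduced from the counts given in the abstract, assuming Pearson's test without continuity correction on the implied 2 × 2 tables:

```python
from scipy.stats import chi2_contingency

# 2x2 tables reconstructed from the abstract's counts (n = 1000 patients):
# rows = squawks present / absent, columns = mosaic pattern present / absent
mosaic = [[20, 49 - 20], [140, 951 - 140]]
# rows = fibrosis on HRCT / no fibrosis, columns = squawks present / absent
fibrosis = [[45, 715 - 45], [4, 285 - 4]]

chi2_mosaic, p_mosaic, _, _ = chi2_contingency(mosaic, correction=False)
chi2_fib, p_fib, _, _ = chi2_contingency(fibrosis, correction=False)
print(round(chi2_mosaic, 1), round(chi2_fib, 2))  # 23.6 10.46, as reported
```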
Affiliation(s)
- Cesar Fukuda
- Interstitial Lung Diseases Program, Pulmonology Service
- Israel Missrie
- Radiology Service, São Paulo Federal University, São Paulo, Brazil
80
Nabi FG, Sundaraj K, Lam CK. Identification of asthma severity levels through wheeze sound characterization and classification using integrated power features. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.04.018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
81
Ghulam Nabi F, Sundaraj K, Chee Kiang L, Palaniappan R, Sundaraj S. Wheeze sound analysis using computer-based techniques: a systematic review. Biomed Tech (Berl) 2019; 64:1-28. [PMID: 29087951 DOI: 10.1515/bmt-2016-0219] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2016] [Accepted: 08/24/2017] [Indexed: 11/15/2022]
Abstract
Wheezes are high-pitched continuous respiratory acoustic sounds produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction, and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer, and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, and 3) analysis using combinations of features and on subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.
Affiliation(s)
- Fizza Ghulam Nabi
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia, Phone: +601111519452
- Kenneth Sundaraj
- Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM), 76100 Durian Tunggal, Melaka, Malaysia
- Lam Chee Kiang
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Rajkumar Palaniappan
- School of Electronics Engineering, Vellore Institute of Technology (VIT), Tamil Nadu 632014, India
- Sebastian Sundaraj
- Department of Anesthesiology, Hospital Tengku Ampuan Rahimah (HTAR), 41200 Klang, Selangor, Malaysia
82
Rocha BM, Filos D, Mendes L, Serbes G, Ulukaya S, Kahya YP, Jakovljevic N, Turukalo TL, Vogiatzis IM, Perantoni E, Kaimakamis E, Natsiavas P, Oliveira A, Jácome C, Marques A, Maglaveras N, Pedro Paiva R, Chouvarda I, de Carvalho P. An open access database for the evaluation of respiratory sound classification algorithms. Physiol Meas 2019; 40:035001. [PMID: 30708353 DOI: 10.1088/1361-6579/ab03ea] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Over the last few decades, there has been significant interest in the automatic analysis of respiratory sounds. However, currently there are no publicly available large databases with which new algorithms can be evaluated and compared. Further developments in the field are dependent on the creation of such databases. APPROACH This paper describes a public respiratory sound database, which was compiled for an international competition, the first scientific challenge of the IFMBE's International Conference on Biomedical and Health Informatics. The database includes 920 recordings acquired from 126 participants and two sets of annotations. One set contains 6898 annotated respiratory cycles, some including crackles, wheezes, or a combination of both, and some with no adventitious respiratory sounds. In the other set, precise locations of 10 775 events of crackles and wheezes were annotated. MAIN RESULTS The best system that participated in the challenge achieved an average score of 52.5% with the respiratory cycle annotations and an average score of 91.2% with the event annotations. SIGNIFICANCE The creation and public release of this database will be useful to the research community and could bring attention to the respiratory sound classification problem.
Affiliation(s)
- Bruno M Rocha
- Department of Informatics Engineering, Centre for Informatics and Systems (CISUC), University of Coimbra, Coimbra, Portugal. Author to whom any correspondence should be addressed
83
Evaluation of features for classification of wheezes and normal respiratory sounds. PLoS One 2019; 14:e0213659. [PMID: 30861052 PMCID: PMC6414007 DOI: 10.1371/journal.pone.0213659] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Accepted: 02/26/2019] [Indexed: 12/01/2022] Open
Abstract
Chronic Respiratory Diseases (CRDs), such as asthma and Chronic Obstructive Pulmonary Disease (COPD), are leading causes of death worldwide. Although neither asthma nor COPD is curable, both can be managed by close monitoring of symptoms to prevent worsening of the condition. One key symptom that needs to be monitored is the occurrence of wheezing sounds during breathing, since its early identification could prevent serious exacerbations. Because wheezing can happen randomly without warning, a long-term monitoring system with automatic wheeze detection could be extremely helpful for managing these respiratory diseases. This study evaluates the discriminatory ability of the different types of features used in previous related studies, 105 individual features in total, for automatic identification of wheezing sounds during breathing. A linear classifier is used to determine the best features for classification by evaluating several performance metrics, including the rank-sum statistical test, area under the sensitivity–specificity curve (AUC), F1 score, Matthews correlation coefficient (MCC), and relative computation time. The tonality index attained the highest effect size, at 87.95%, and was found to be the feature with the lowest p-value in the rank-sum significance test. The third MFCC coefficient achieved the highest AUC and average optimum F1 score, at 0.8919 and 82.67% respectively, while the highest average optimum MCC was obtained by the first coefficient of a 6th-order LPC. The best possible combinations of two and three features for wheeze detection are also studied. The study concludes with an analysis of the trade-offs between accuracy, reliability, and computation requirements of the different features, which will be highly useful for researchers when designing algorithms for automatic wheeze identification.
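The evaluation loop described above is easy to reproduce in outline. The sketch below scores two hypothetical features with AUC, F1, and MCC on synthetic labels; the data, median-threshold rule, and feature definitions are our own stand-ins, not any of the study's 105 features:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, matthews_corrcoef

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 breath segments, label 1 = wheeze present
y = rng.integers(0, 2, 200)
# Two hypothetical features: one correlated with the label, one pure noise
informative = y + rng.normal(0, 0.8, 200)
noise = rng.normal(0, 1, 200)

for name, feat in [("informative", informative), ("noise", noise)]:
    auc = roc_auc_score(y, feat)
    # Threshold at the median to obtain hard predictions for F1 / MCC
    pred = (feat > np.median(feat)).astype(int)
    print(f"{name}: AUC={auc:.2f} F1={f1_score(y, pred):.2f} "
          f"MCC={matthews_corrcoef(y, pred):.2f}")
```

Ranking many candidate features by such metrics, then testing small combinations, mirrors the study's trade-off analysis between discriminatory power and computation cost.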
84
Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential. Curr Opin Pulm Med 2019; 24:117-123. [PMID: 29251699 DOI: 10.1097/mcp.0000000000000459] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases. RECENT FINDINGS Machine learning has been successfully used in the automated interpretation of pulmonary function tests for the differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural networks are state-of-the-art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches, such as the forced oscillation test, breath analysis, lung sound analysis, and telemedicine, with promising results in small-scale studies. SUMMARY Overall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost adoption by the medical community.
85
Cherrez-Ojeda I, Felix M, Vanegas E, Mata VL, Jimenez FM, Ugarte Fornell LG. Rhonchus and Valve-Like Sensation as Initial Manifestations of Long-Standing Foreign Body Aspiration: A Case Report. AMERICAN JOURNAL OF CASE REPORTS 2019; 20:70-73. [PMID: 30651531 PMCID: PMC6345106 DOI: 10.12659/ajcr.913405] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Patient: Male, 52 • Final Diagnosis: Foreign body aspiration • Symptoms: Rhonchus, thoracic valve-like sensation • Medication: — • Clinical Procedure: — • Specialty: Pulmonology
Affiliation(s)
- Ivan Cherrez-Ojeda
- Universidad Espíritu Santo (Holy Spirit University), Samborondón, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
- Miguel Felix
- Universidad Espíritu Santo (Holy Spirit University), Samborondón, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
- Emanuel Vanegas
- Universidad Espíritu Santo (Holy Spirit University), Samborondón, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
- Valeria L Mata
- Universidad Espíritu Santo (Holy Spirit University), Samborondón, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
- Fanny M Jimenez
- Universidad Espíritu Santo (Holy Spirit University), Samborondón, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
86
Ulukaya S, Serbes G, Kahya YP. Wheeze type classification using non-dyadic wavelet transform based optimal energy ratio technique. Comput Biol Med 2018; 104:175-182. [PMID: 30496939 DOI: 10.1016/j.compbiomed.2018.11.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Revised: 11/03/2018] [Accepted: 11/05/2018] [Indexed: 10/27/2022]
Abstract
BACKGROUND AND OBJECTIVE Wheezes in pulmonary sounds are anomalies which are often associated with obstructive type of lung diseases. The previous works on wheeze-type classification focused mainly on using fixed time-frequency/scale resolution based on Fourier and wavelet transforms. The main contribution of the proposed method, in which the time-scale resolution can be tuned according to the signal of interest, is to discriminate monophonic and polyphonic wheezes with higher accuracy than previously suggested time and time-frequency/scale based methods. METHODS An optimal Rational Dilation Wavelet Transform (RADWT) based peak energy ratio (PER) parameter selection method is proposed to discriminate wheeze types. Previously suggested Quartile Frequency Ratios, Mean Crossing Irregularity, Multiple Signal Classification, Mel-frequency Cepstrum and Dyadic Discrete Wavelet Transform approaches are also applied and the superiority of the proposed method is demonstrated in leave-one-out (LOO) and leave-one-subject-out (LOSO) cross validation schemes with support vector machine (SVM), k nearest neighbor (k-NN) and extreme learning machine (ELM) classifiers. RESULTS The results show that the proposed RADWT based method outperforms the state-of-the-art time, frequency, time-frequency and time-scale domain approaches for all classifiers in both LOO and LOSO cross validation settings. The highest accuracy values are obtained as 86% and 82.9% in LOO and LOSO respectively when the proposed PER features are fed into SVM. CONCLUSIONS It is concluded that time and frequency domain characteristics of wheezes are not steady and hence, tunable time-scale representations are more successful in discriminating polyphonic and monophonic wheezes when compared with conventional fixed resolution representations.
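RADWT implementations are not available in common Python libraries, but the underlying idea, comparing signal energy across wavelet subbands, can be sketched with a plain dyadic Haar transform. This is closer to the fixed-resolution baselines the paper compares against than to RADWT itself, and the signals and parameters are illustrative:

```python
import numpy as np

def haar_dwt_energy_ratios(x, levels=4):
    """Energy ratio per dyadic wavelet subband (a crude wheeze-typing feature)."""
    energies = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation band
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail band
        energies.append(np.sum(d ** 2))
        approx = a
    energies.append(np.sum(approx ** 2))
    total = sum(energies)
    return [e / total for e in energies]  # orthonormal Haar: ratios sum to 1

# Monophonic-like (one tone) vs polyphonic-like (two tones) toy signals
sr = 8000
t = np.arange(0, 0.256, 1 / sr)  # 2048 samples, a power of two
mono = np.sin(2 * np.pi * 400 * t)
poly = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 1200 * t)
print([round(v, 2) for v in haar_dwt_energy_ratios(mono)])
print([round(v, 2) for v in haar_dwt_energy_ratios(poly)])
```

A polyphonic wheeze spreads energy over more subbands than a monophonic one, which is the intuition behind peak-energy-ratio style features; RADWT refines this by tuning the time-scale resolution rationally rather than dyadically.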
Affiliation(s)
- Sezer Ulukaya
- Department of Electrical and Electronics Engineering, Boğaziçi University, 34342, Istanbul, Turkey; Department of Electrical and Electronics Engineering, Trakya University, 22030, Edirne, Turkey.
- Gorkem Serbes
- Department of Biomedical Engineering, Yildiz Technical University, 34220, Istanbul, Turkey.
- Yasemin P Kahya
- Department of Electrical and Electronics Engineering, Boğaziçi University, 34342, Istanbul, Turkey.
87
Characterization and classification of asthmatic wheeze sounds according to severity level using spectral integrated features. Comput Biol Med 2018; 104:52-61. [PMID: 30439599 DOI: 10.1016/j.compbiomed.2018.10.035] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Revised: 10/31/2018] [Accepted: 10/31/2018] [Indexed: 11/21/2022]
Abstract
OBJECTIVE This study aimed to investigate and classify wheeze sounds of asthmatic patients according to their severity level (mild, moderate and severe) using spectral integrated (SI) features. METHOD Segmented and validated wheeze sounds were obtained from auscultation recordings of the trachea and lower lung base of 55 asthmatic patients during tidal breathing manoeuvres. The segments were multi-labelled into 9 groups based on the auscultation location and/or breath phases. Bandwidths were selected based on the physiology, and a corresponding SI feature was computed for each segment. Univariate and multivariate statistical analyses were then performed to investigate the discriminatory behaviour of the features with respect to the severity levels in the various groups. The asthmatic severity levels in the groups were then classified using the ensemble (ENS), support vector machine (SVM) and k-nearest neighbour (KNN) methods. RESULTS AND CONCLUSION All statistical comparisons exhibited a significant difference (p < 0.05) among the severity levels with few exceptions. In the classification experiments, the ensemble classifier exhibited better performance in terms of sensitivity, specificity and positive predictive value (PPV). The trachea inspiratory group showed the highest classification performance compared with all the other groups. Overall, the best PPV for the mild, moderate and severe samples were 95% (ENS), 88% (ENS) and 90% (SVM), respectively. With respect to location, the tracheal related wheeze sounds were most sensitive and specific predictors of asthma severity levels. In addition, the classification performances of the inspiratory and expiratory related groups were comparable, suggesting that the samples from these locations are equally informative.
88
Speranza CG, Moraes R. Instantaneous frequency based index to characterize respiratory crackles. Comput Biol Med 2018; 102:21-29. [PMID: 30240835 DOI: 10.1016/j.compbiomed.2018.09.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2018] [Revised: 09/11/2018] [Accepted: 09/11/2018] [Indexed: 11/16/2022]
Abstract
BACKGROUND Crackle is a lung sound widely employed by health staff to identify respiratory diseases. The two-cycle duration (2CD) is a quantitative index pointed out by the American Thoracic Society and the European Respiratory Society to classify respiratory crackles as fine or coarse. However, this index, measured in the time domain, is highly affected by noise and filters of recording systems. Such factors hamper the analysis of data reported by different research groups. This work proposes a new index based on the instantaneous frequency of crackles estimated by means of discrete-time pseudo Wigner-Ville distribution. METHOD Comparisons between 2CD and the proposed index were carried out for simulated and actual crackles. Normal breathing sounds were added to simulated crackles; the resulting signals were then applied to a band-pass filter that mimics those belonging to lung sound acquisition systems. Thus, the impact of noise and filtering on these two indices was assessed for simulated crackles. Kruskal-Wallis and Dunn's tests as well as Gaussian mixture model (GMM) were applied to the two indices measured from 382 actual crackles belonging to open databases. RESULTS The proposed index is much less susceptible to waveform distortions due to noise and filtering when compared to the 2CD. Thus, the statistical analyses allow the identification of two classes of crackles from actual databases; the same does not occur when using 2CD. CONCLUSIONS The new proposed index has the potential to contribute for a better characterization of crackles generated by different respiratory diseases, assisting their diagnosis during clinical exams.
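The notion of an instantaneous-frequency based index can be illustrated without a full pseudo Wigner-Ville distribution. The sketch below uses the analytic signal from a Hilbert transform as a simpler stand-in, applied to a synthetic damped oscillation loosely resembling a crackle; the waveform and its parameters are our own, not from the paper:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic crackle-like waveform: a 700 Hz damped oscillation in a 20 ms window
sr = 10000
t = np.arange(0, 0.02, 1 / sr)
crackle = np.exp(-t * 200) * np.sin(2 * np.pi * 700 * t)

# Instantaneous frequency from the phase of the analytic signal
analytic = hilbert(crackle)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * sr / (2 * np.pi)  # Hz at each sample step
print(np.median(inst_freq))  # close to the 700 Hz oscillation frequency
```

A frequency-domain index of this kind is largely unaffected by the band-pass filtering and additive noise that distort time-domain measures such as the two-cycle duration.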
Affiliation(s)
- Carlos G Speranza
- Electronic Academic Department (DAELN), Federal Institute of Santa Catarina (IFSC), Av. Mauro Ramos, 950, Florianopolis/SC, 88020-300, Brazil.
- Raimes Moraes
- Electrical and Electronic Engineering Department (EEL), Federal University of Santa Catarina (UFSC), Campus Universitario Reitor João David Ferreira Lima, Rua Delfino Conti, s/n, Trindade, Florianopolis/SC, 88040-370, Brazil.
89
Abstract
Wearable sensors are already impacting healthcare and medicine by enabling health monitoring outside of the clinic and prediction of health events. This paper reviews current and prospective wearable technologies and their progress toward clinical application. We describe technologies underlying common, commercially available wearable sensors and early-stage devices and outline research, when available, to support the use of these devices in healthcare. We cover applications in the following health areas: metabolic, cardiovascular and gastrointestinal monitoring; sleep, neurology, movement disorders and mental health; maternal, pre- and neo-natal care; and pulmonary health and environmental exposures. Finally, we discuss challenges associated with the adoption of wearable sensors in the current healthcare ecosystem and discuss areas for future research and development.
Affiliation(s)
- Jessilyn Dunn
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Mobilize Center, Stanford University, Stanford, CA 94305, USA
- Ryan Runge
- Department of Genetics, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Mobilize Center, Stanford University, Stanford, CA 94305, USA
- Michael Snyder
- Department of Genetics, Stanford University, Stanford, CA 94305, USA
90
Esmaeili N, Rabbani H, Makaremi S, Golabbakhsh M, Saghaei M, Parviz M, Naghibi K. Tracheal Sound Analysis for Automatic Detection of Respiratory Depression in Adult Patients during Cataract Surgery under Sedation. JOURNAL OF MEDICAL SIGNALS & SENSORS 2018; 8:140-146. [PMID: 30181962 PMCID: PMC6116314 DOI: 10.4103/jmss.jmss_67_16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Background: Tracheal sound analysis is a simple way to study abnormalities of the upper airway, such as airway obstruction. Hence, it may be an effective method for the detection of alveolar hypoventilation and respiratory depression. This study was designed to investigate the value of tracheal sound analysis for detecting respiratory depression during cataract surgery under sedation. Methods: After Institutional Ethical Committee approval and informed patient consent, we studied thirty adult American Society of Anesthesiologists physical status I and II patients scheduled for cataract surgery under sedation anesthesia. Recording of tracheal sounds with a microphone started 1 min before administration of sedative drugs. Recorded sounds were examined by the anesthesiologist to detect periods of respiratory depression longer than 10 s. The tracheal sound signals were then converted to spectrogram images, and image processing was performed to detect respiratory depression. Finally, depression periods detected from tracheal sound analysis were compared to those detected by the anesthesiologist. Results: We extracted five features from the spectrogram images of tracheal sounds for the detection of respiratory depression. A decision tree and a support vector machine (SVM) with a radial basis function (RBF) kernel were then used to classify the data using these features; the designed decision tree outperformed the SVM, with a sensitivity of 89% and specificity of 97%. Conclusions: The results of this study show that morphological processing of spectrogram images of tracheal sound signals from a microphone placed over the suprasternal notch may reliably provide an early warning of respiratory depression and the onset of airway obstruction in patients under sedation.
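The pipeline described (spectrogram images, then features, then a decision tree) can be sketched end to end on toy data. The synthetic segments and the three summary features below are our own simplifications, not the paper's five morphological features:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
sr = 4000

def segment(depressed):
    """Synthetic 1 s tracheal-sound stand-in: a breath burst, or near-silence."""
    x = rng.normal(0, 0.01, sr)
    if not depressed:
        x[sr // 4: 3 * sr // 4] += rng.normal(0, 0.5, sr // 2)  # breath burst
    return x

def features(x):
    # Summary statistics of the spectrogram image, loosely mirroring the idea
    # of morphological features (the paper's exact features are not given here)
    _, _, S = spectrogram(x, fs=sr, nperseg=256)
    col_energy = S.sum(axis=0)  # energy per time column of the image
    return [col_energy.mean(), col_energy.max(), col_energy.std()]

# 80 labeled segments, alternating normal (0) and depressed (1)
X = [features(segment(d)) for d in [0, 1] * 40]
y = [0, 1] * 40
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:60], y[:60])
print(clf.score(X[60:], y[60:]))  # clearly separable toy data: near 1.0
```

Real tracheal recordings are far noisier, which is why the paper compares classifiers and reports sensitivity and specificity rather than plain accuracy.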
Affiliation(s)
- Neda Esmaeili
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences; Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences; Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Soheila Makaremi
- Department of Anesthesia, Isfahan University of Medical Sciences, Isfahan, Iran
- Marzieh Golabbakhsh
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Quebec, Canada
- Mahmoud Saghaei
- Department of Anesthesia, Isfahan University of Medical Sciences, Isfahan, Iran
- Mehdi Parviz
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Khosro Naghibi
- Department of Anesthesia, Isfahan University of Medical Sciences, Isfahan, Iran
91
Hong K, Essid S, Ser W, Foo DG. A robust audio classification system for detecting pulmonary edema. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.07.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
92
Andrès E, Gass R, Charloux A, Brandt C, Hentzler A. Respiratory sound analysis in the era of evidence-based medicine and the world of medicine 2.0. J Med Life 2018; 11:89-106. [PMID: 30140315 PMCID: PMC6101681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Accepted: 04/10/2018] [Indexed: 12/03/2022] Open
Abstract
OBJECTIVE This paper describes the state of the art, scientific publications, and ongoing research related to methods of analysis of respiratory sounds. METHODS AND MATERIAL Narrative review of the current medical and technological literature using PubMed and personal experience. RESULTS We outline the various techniques currently used to collect auscultation sounds and provide a physical description of known pathological sounds for which automatic detection tools have been developed. Modern tools are based on artificial intelligence and on techniques such as artificial neural networks, fuzzy systems, and genetic algorithms. CONCLUSION The next step will consist of finding new markers to increase the efficiency of decision-aiding algorithms and tools.
Affiliation(s)
- E Andrès
- Department of Internal Medicine, Clinique Médicale B, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- R Gass
- Technical Academy Fellow, Alcatel-Lucent, Independent expert, Bolsenheim, France
- A Charloux
- Department of Physiology and Lung Function Exploration, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- C Brandt
- Department of Cardiology, Clinique Médicale B, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- A Hentzler
- Physics Engineer, General Director INCOTEC, Illkirch Graffenstaden, France
93
Pasterkamp H. The highs and lows of wheezing: A review of the most popular adventitious lung sound. Pediatr Pulmonol 2018; 53:243-254. [PMID: 29266880 DOI: 10.1002/ppul.23930] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/30/2017] [Accepted: 11/26/2017] [Indexed: 12/22/2022]
Abstract
Wheezing is the most widely reported adventitious lung sound in the English language. It is recognized by health professionals as well as by lay people, although often with a different meaning. Wheezing is an indicator of airway obstruction and is therefore of interest particularly for the assessment of young children and in other situations where objective documentation of lung function is not generally available. This review summarizes our current understanding of the mechanisms producing wheeze, its subjective perception and description, its objective measurement and visualization, and its relevance in clinical practice.
94
Rocha BM, Filos D, Mendes L, Vogiatzis I, Perantoni E, Kaimakamis E, Natsiavas P, Oliveira A, Jácome C, Marques A, Paiva RP, Chouvarda I, Carvalho P, Maglaveras N. A Respiratory Sound Database for the Development of Automated Classification. PRECISION MEDICINE POWERED BY PHEALTH AND CONNECTED HEALTH 2018. [DOI: 10.1007/978-981-10-7419-6_6] [Citation(s) in RCA: 63] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
95
An Automated Lung Sound Preprocessing and Classification System Based on Spectral Analysis Methods. PRECISION MEDICINE POWERED BY PHEALTH AND CONNECTED HEALTH 2018. [DOI: 10.1007/978-981-10-7419-6_8] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]