1. Zhao Q, Geng S, Wang B, Sun Y, Nie W, Bai B, Yu C, Zhang F, Tang G, Zhang D, Zhou Y, Liu J, Hong S. Deep Learning in Heart Sound Analysis: From Techniques to Clinical Applications. Health Data Science 2024;4:0182. PMID: 39387057; PMCID: PMC11461928; DOI: 10.34133/hds.0182.
Abstract
Importance: Heart sound auscultation is a routinely used physical examination in clinical practice to identify potential cardiac abnormalities. However, accurate interpretation of heart sounds requires specialized training and experience, which limits its generalizability. Deep learning, a subset of machine learning, involves training artificial neural networks to learn from large datasets and perform complex tasks with intricate patterns. Over the past decade, deep learning has been successfully applied to heart sound analysis, achieving remarkable results and accumulating substantial heart sound data for model training. Although several reviews have summarized deep learning algorithms for heart sound analysis, there is a lack of comprehensive summaries regarding the available heart sound data and the clinical applications. Highlights: This review will compile the commonly used heart sound datasets, introduce the fundamentals and state-of-the-art techniques in heart sound analysis and deep learning, and summarize the current applications of deep learning for heart sound analysis, along with their limitations and areas for future improvement. Conclusions: The integration of deep learning into heart sound analysis represents a significant advancement in clinical practice. The growing availability of heart sound datasets and the continuous development of deep learning techniques contribute to the improvement and broader clinical adoption of these models. However, ongoing research is needed to address existing challenges and refine these technologies for broader clinical use.
Affiliation(s)
- Qinghao Zhao: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Boya Wang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Gastrointestinal Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Yutong Sun: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Wenchang Nie: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Baochen Bai: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Chao Yu: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Feng Zhang: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Gongzheng Tang: National Institute of Health Data Science, Peking University, Beijing, China; Institute of Medical Technology, Health Science Center of Peking University, Beijing, China
- Yuxi Zhou: Department of Computer Science, Tianjin University of Technology, Tianjin, China; DCST, BNRist, RIIT, Institute of Internet Industry, Tsinghua University, Beijing, China
- Jian Liu: Department of Cardiology, Peking University People’s Hospital, Beijing, China
- Shenda Hong: National Institute of Health Data Science, Peking University, Beijing, China; Institute of Medical Technology, Health Science Center of Peking University, Beijing, China
2. Al-Zaben A, Al-Fahoum A, Ababneh M, Al-Naami B, Al-Omari G. Improved recovery of cardiac auscultation sounds using modified cosine transform and LSTM-based masking. Med Biol Eng Comput 2024;62:2485-2497. PMID: 38627355; DOI: 10.1007/s11517-024-03088-x.
Abstract
Obtaining accurate cardiac auscultation signals, including basic heart sounds (S1 and S2) and subtle signs of disease, is crucial for improving cardiac diagnoses and making the most of telehealth. This paper introduces an approach that utilizes a modified cosine transform (MCT) and a masking strategy based on long short-term memory (LSTM) networks to distinguish heart sounds and murmurs from background noise and interfering sounds. The MCT captures the repeated pattern of the heart sounds, while the LSTMs are trained to construct masks based on the repeated MCT spectrum. The proposed strategy preserves the clinical relevance of heart sounds even in environments with heavy noise and complex interference. The work demonstrates the clinical significance and reliability of the methodology through in-depth signal visualization and rigorous statistical performance evaluation. In comparative assessments, the approach outperformed recent algorithms such as LU-Net and PC-DAE, and its adaptability to various datasets enhances its reliability and practicality. The suggested method is a potential way to improve the accuracy of cardiovascular diagnostics in an era of rapid advancement in medical signal processing. The proposed approach improved the average signal-to-noise ratio (SNR) by 9.6 dB at an input SNR of -6 dB and by 3.3 dB at an input SNR of 10 dB, and achieved an average signal distortion ratio (SDR) of 8.56 dB across a variety of input SNR values.
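For illustration: the modified cosine transform is specific to this paper, but the general pattern it builds on, an LSTM that predicts a time-frequency mask which is then applied to the noisy spectrum, can be sketched as follows. This is a minimal PyTorch sketch that assumes an ordinary STFT in place of the MCT; all layer sizes and signal parameters are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMMaskDenoiser(nn.Module):
    """Predicts a [0, 1] time-frequency mask from a magnitude spectrogram.
    The plain STFT below stands in for the paper's modified cosine transform."""
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq)

    def forward(self, mag):                 # mag: (batch, frames, n_freq)
        h, _ = self.lstm(mag)
        return torch.sigmoid(self.proj(h))  # mask values in [0, 1]

n_fft, hop = 512, 128
win = torch.hann_window(n_fft)
noisy = torch.randn(1, 16000)               # placeholder: 1 s of audio at 16 kHz
spec = torch.stft(noisy, n_fft, hop, window=win, return_complex=True)
mag = spec.abs().transpose(1, 2)            # (batch, frames, n_freq)

mask = LSTMMaskDenoiser()(mag).transpose(1, 2)
denoised = torch.istft(spec * mask, n_fft, hop, window=win,
                       length=noisy.shape[-1])
```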
Affiliation(s)
- Awad Al-Zaben: Biomedical Engineering Department, Engineering Faculty, Hashemite University, Zarqa, Jordan; Biomedical Systems and Medical Informatics Department, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Amjad Al-Fahoum: Biomedical Systems and Medical Informatics Department, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Muhannad Ababneh: Faculty of Medicine, Interventional Cardiologist, Jordan University of Science and Technology, Irbid, Jordan
- Bassam Al-Naami: Biomedical Engineering Department, Engineering Faculty, Hashemite University, Zarqa, Jordan
- Ghadeer Al-Omari: Biomedical Systems and Medical Informatics Department, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
3. Poh YY, Grooby E, Tan K, Zhou L, King A, Ramanathan A, Malhotra A, Harandi M, Marzbanrad F. NeoSSNet: Real-Time Neonatal Chest Sound Separation Using Deep Learning. IEEE Open Journal of Engineering in Medicine and Biology 2024;5:345-352. PMID: 38899018; PMCID: PMC11186644; DOI: 10.1109/ojemb.2024.3401571.
Abstract
Goal: Auscultation for neonates is a simple and non-invasive method of diagnosing cardiovascular and respiratory disease. However, obtaining high-quality chest sounds containing only heart or lung sounds is non-trivial. Hence, this study introduces a new deep-learning model named NeoSSNet and compares its performance in neonatal chest sound separation against previous methods. Methods: We propose a mask-based architecture similar to Conv-TasNet. The encoder and decoder consist of a 1D convolution and a 1D transposed convolution, while the mask generator consists of a convolution and transformer architecture. The input chest sounds are first encoded as a sequence of tokens using the 1D convolution. The tokens are then passed to the mask generator, which produces two masks, one for heart sounds and one for lung sounds. Each mask is applied to the input token sequence, and the masked tokens are converted back to waveforms using the 1D transposed convolution. Results: Our proposed model showed superior results compared to previous methods on objective distortion measures, with improvements ranging from 2.01 dB to 5.06 dB. The proposed model is also significantly faster than previous methods, running at least 17 times faster. Conclusions: The proposed model could be a suitable preprocessing step for any health monitoring system where only the heart sound or lung sound is desired.
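For illustration: the pipeline described here (1D-conv encoder, transformer mask generator, one mask per source, 1D transposed-conv decoder) follows the Conv-TasNet template. A minimal sketch of that template is given below; the layer sizes, two-layer transformer, and sigmoid masks are assumptions, not the authors' exact NeoSSNet configuration.

```python
import torch
import torch.nn as nn

class MaskSeparator(nn.Module):
    """Conv-TasNet-style separation: encode the waveform as tokens, predict
    one mask per source with a small transformer, mask, then decode."""
    def __init__(self, n_filters=256, kernel=16, stride=8, n_sources=2):
        super().__init__()
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride)
        layer = nn.TransformerEncoderLayer(d_model=n_filters, nhead=4,
                                           batch_first=True)
        self.mask_net = nn.TransformerEncoder(layer, num_layers=2)
        self.mask_head = nn.Conv1d(n_filters, n_filters * n_sources, 1)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)
        self.n_sources = n_sources

    def forward(self, wav):                       # wav: (batch, 1, time)
        tokens = self.encoder(wav)                # (batch, filters, frames)
        h = self.mask_net(tokens.transpose(1, 2)).transpose(1, 2)
        masks = torch.sigmoid(self.mask_head(h))
        masks = masks.view(wav.size(0), self.n_sources, -1, masks.size(-1))
        # One decoded waveform per source: heart estimate, lung estimate.
        return [self.decoder(tokens * masks[:, s]) for s in range(self.n_sources)]

mixture = torch.randn(1, 1, 16000)                # placeholder chest recording
heart_est, lung_est = MaskSeparator()(mixture)
```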
Affiliation(s)
- Yang Yi Poh: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800, Australia
- Ethan Grooby: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800, Australia; BC Children's Hospital Research Institute and Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Kenneth Tan: Monash Newborn, Monash Children's Hospital and Department of Paediatrics, Monash University, Clayton, VIC 3800, Australia
- Lindsay Zhou: Monash Newborn, Monash Children's Hospital and Department of Paediatrics, Monash University, Clayton, VIC 3800, Australia
- Arrabella King: Monash Newborn, Monash Children's Hospital and Department of Paediatrics, Monash University, Clayton, VIC 3800, Australia
- Ashwin Ramanathan: Monash Newborn, Monash Children's Hospital and Department of Paediatrics, Monash University, Clayton, VIC 3800, Australia
- Atul Malhotra: Monash Newborn, Monash Children's Hospital and Department of Paediatrics, Monash University, Clayton, VIC 3800, Australia
- Mehrtash Harandi: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800, Australia
- Faezeh Marzbanrad: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800, Australia
4. Sabry AH, Dallal Bashi OI, Nik Ali NH, Al Kubaisi YM. Lung disease recognition methods using audio-based analysis with machine learning. Heliyon 2024;10:e26218. PMID: 38420389; PMCID: PMC10900411; DOI: 10.1016/j.heliyon.2024.e26218.
Abstract
Computer-based automated approaches and improvements in lung sound recording techniques have made lung sound-based diagnostics more accurate and free of subjectivity errors. Computer-based lung sound analysis makes it possible to evaluate lung sound features more thoroughly by analyzing changes in lung sound behavior, recording measurements, suppressing noise contamination, and producing graphical representations. This paper starts with a discussion of the need for this research area, providing an overview of the field and the motivations behind it. Following that, it details the survey methodology used in this work. It then discusses the elements of sound-based lung disease classification using machine learning algorithms, including commonly used datasets, feature extraction techniques, pre-processing methods, artifact removal methods, lung-heart sound separation, deep learning algorithms, and wavelet transforms of lung audio signals. The study surveys prior reviews of lung screening, including a summary table of these references, and discusses the gaps in the existing literature. It concludes that sound-based machine learning for the classification of respiratory diseases shows promising results. While we believe this material will prove valuable to physicians and researchers exploring sound-signal-based machine learning, large-scale investigations remain essential to solidify the findings and foster wider adoption within the medical community.
Affiliation(s)
- Ahmad H. Sabry: Department of Medical Instrumentation Engineering Techniques, Shatt Al-Arab University College, Basra, Iraq
- Omar I. Dallal Bashi: Medical Technical Institute, Northern Technical University, Mosul, 41002, Iraq
- N.H. Nik Ali: School of Electrical Engineering, College of Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia
- Yasir Mahmood Al Kubaisi: Department of Sustainability Management, Dubai Academic Health Corporation, Dubai, 4545, United Arab Emirates
5. Huang DM, Huang J, Qiao K, Zhong NS, Lu HZ, Wang WJ. Deep learning-based lung sound analysis for intelligent stethoscope. Mil Med Res 2023;10:44. PMID: 37749643; PMCID: PMC10521503; DOI: 10.1186/s40779-023-00479-3.
Abstract
Auscultation is crucial for the diagnosis of respiratory system diseases. However, traditional stethoscopes have inherent limitations, such as inter-listener variability and subjectivity, and they cannot record respiratory sounds for offline/retrospective diagnosis or remote prescriptions in telemedicine. The emergence of digital stethoscopes has overcome these limitations by allowing physicians to store and share respiratory sounds for consultation and education. On this basis, machine learning, particularly deep learning, enables the fully automatic analysis of lung sounds that may pave the way for intelligent stethoscopes. This review thus aims to provide a comprehensive overview of deep learning algorithms used for lung sound analysis to emphasize the significance of artificial intelligence (AI) in this field. We focus on each component of deep learning-based lung sound analysis systems, including the task categories, public datasets, denoising methods, and, most importantly, existing deep learning methods, i.e., the state-of-the-art approaches that convert lung sounds into two-dimensional (2D) spectrograms and use convolutional neural networks for the end-to-end recognition of respiratory diseases or abnormal lung sounds. Additionally, this review highlights current challenges in this field, including the variety of devices, noise sensitivity, and poor interpretability of deep models. To address the poor reproducibility and heterogeneity of deep learning work in this field, this review also provides a scalable and flexible open-source framework that aims to standardize the algorithmic workflow and provide a solid basis for replication and future extension: https://github.com/contactless-healthcare/Deep-Learning-for-Lung-Sound-Analysis .
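For illustration: the core recipe the review describes, converting the recording to a 2D spectrogram and classifying it with a CNN, reduces to a few lines. A minimal sketch follows, assuming torchaudio, a 4 kHz sampling rate, and an illustrative four-class output; none of these choices come from the review itself.

```python
import torch
import torch.nn as nn
import torchaudio

class LungSoundCNN(nn.Module):
    """Log-mel spectrogram front end followed by a small 2D CNN classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=4000, n_fft=256, hop_length=64, n_mels=64)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, wav):                            # wav: (batch, samples)
        spec = self.to_db(self.mel(wav)).unsqueeze(1)  # (batch, 1, mels, frames)
        return self.net(spec)

logits = LungSoundCNN()(torch.randn(2, 4000))          # two 1-s clips at 4 kHz
```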
Affiliation(s)
- Dong-Min Huang: Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Jia Huang: The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Kun Qiao: The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Nan-Shan Zhong: Guangzhou Institute of Respiratory Health, China State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Hong-Zhou Lu: The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Wen-Jin Wang: Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
6. Wang W, Qin D, Wang S, Fang Y, Zheng Y. A multi-channel UNet framework based on SNMF-DCNN for robust heart-lung-sound separation. Comput Biol Med 2023;164:107282. PMID: 37499297; DOI: 10.1016/j.compbiomed.2023.107282.
Abstract
Cardiopulmonary and cardiovascular diseases threaten human health and cause many deaths worldwide each year, so it is essential to screen for cardiopulmonary disease more accurately and efficiently. Auscultation is a non-invasive method for physicians' perception of disease. The Heart Sounds (HS) and Lung Sounds (LS) recorded by an electronic stethoscope carry acoustic information that is helpful in the diagnosis of pulmonary conditions, but mutual interference between HS and LS in both the time and frequency domains reduces diagnostic efficiency. This paper proposes a blind source separation (BSS) strategy that first classifies Heart-Lung-Sound (HLS) according to its LS features and then separates it into HS and LS. Sparse Non-negative Matrix Factorization (SNMF) is employed to extract the LS features in HLS; a network built on a Dilated Convolutional Neural Network (DCNN) then classifies HLS into five types by the magnitude features of LS. Finally, a Multi-Channel UNet (MCUNet) separation model is applied to each category of HLS. This paper is the first to propose the SNMF-DCNN classification method for HLS and to apply UNet to the cardiopulmonary sound separation domain. Compared with other state-of-the-art methods, the proposed framework achieves higher separation quality and robustness.
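For illustration: the NMF front end decomposes the mixture's magnitude spectrogram into nonnegative spectral bases and activations. Below is a minimal numpy sketch of plain NMF-based heart/lung separation; the sparsity penalty, the DCNN classifier, and the multi-channel UNet from the paper are omitted, and the low-frequency heuristic for attributing bases to heart sounds is an assumption for the sketch.

```python
import numpy as np
from scipy.signal import stft, istft

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates for ||V - WH||^2."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

fs = 4000
mix = np.random.randn(8 * fs)                 # placeholder heart+lung mixture
f, t, Z = stft(mix, fs, nperseg=256)
V = np.abs(Z)

W, H = nmf(V, rank=8)
centroid = (f @ W) / W.sum(axis=0)            # spectral centroid of each basis
heart_idx = centroid < 150                    # assume heart energy sits below 150 Hz
V_heart = W[:, heart_idx] @ H[heart_idx]
mask = V_heart / (W @ H + 1e-9)               # Wiener-style soft mask
_, heart = istft(Z * mask, fs, nperseg=256)
_, lung = istft(Z * (1 - mask), fs, nperseg=256)
```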
Affiliation(s)
- Weibo Wang: College of Electrical and Electronic Information, Xihua University, Chengdu, 610036, China
- Dimei Qin: College of Electrical and Electronic Information, Xihua University, Chengdu, 610036, China
- Shubo Wang: College of Electrical and Electronic Information, Xihua University, Chengdu, 610036, China
- Yu Fang: College of Electrical and Electronic Information, Xihua University, Chengdu, 610036, China
- Yongkang Zheng: State Grid Sichuan Electric Power Research Institute, Chengdu, 610096, China
7. Yang C, Hu N, Xu D, Wang Z, Cai S. Monaural cardiopulmonary sound separation via complex-valued deep autoencoder and cyclostationarity. Biomed Phys Eng Express 2023;9. PMID: 36796095; DOI: 10.1088/2057-1976/acbc7f.
Abstract
Objective: Cardiopulmonary auscultation is poised to become smarter with the emergence of electronic stethoscopes. Cardiac and lung sounds often appear mixed in both the time and frequency domains, which degrades auscultation quality and downstream diagnostic performance. Conventional cardiopulmonary sound separation methods can be challenged by the diversity of cardiac and lung sounds. In this study, the data-driven feature learning of deep autoencoders and the common quasi-cyclostationarity characteristic are exploited for monaural separation. Approach: Unlike most existing separation methods, which handle only the amplitude of the short-time Fourier transform (STFT) spectrum, a complex-valued U-net (CUnet) with a deep autoencoder structure is built to fully exploit both amplitude and phase information. As a common characteristic of cardiopulmonary sounds, the quasi-cyclostationarity of cardiac sound is incorporated into the loss function for training. Main results: In experiments separating cardiac and lung sounds for heart valve disorder auscultation, the average achieved signal distortion ratio (SDR), signal interference ratio (SIR), and signal artifact ratio (SAR) in cardiac sounds were 7.84 dB, 21.72 dB, and 8.06 dB, respectively. The detection accuracy of aortic stenosis rose from 92.21% to 97.90%. Significance: The proposed method improves cardiopulmonary sound separation performance and may improve detection accuracy for cardiopulmonary diseases.
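For illustration: a full complex-valued U-net is too large for a sketch, but the core idea the abstract highlights, processing the complex STFT so that phase is modified along with amplitude, can be shown with a complex ratio mask. The single Conv2d below is a hypothetical stand-in for the CUnet, and the quasi-cyclostationarity loss term is omitted.

```python
import torch

n_fft, hop = 512, 128
win = torch.hann_window(n_fft)
mix = torch.randn(1, 16000)                              # placeholder mixture
Z = torch.stft(mix, n_fft, hop, window=win, return_complex=True)

# Stack real and imaginary parts as two input channels.
feat = torch.stack([Z.real, Z.imag], dim=1)              # (1, 2, 257, frames)
net = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)    # stand-in for the CUnet
m = net(feat)
mask = torch.complex(m[:, 0], m[:, 1])                   # complex-valued ratio mask
# A complex mask rescales amplitude and rotates phase at once.
cardiac_est = torch.istft(Z * mask, n_fft, hop, window=win,
                          length=mix.shape[-1])
```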
Affiliation(s)
- Chunjian Yang: School of Electronics and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Nan Hu: School of Electronics and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Dongyang Xu: Center for Intelligent Acoustics and Signal Processing, Huzhou Institute of Zhejiang University, Huzhou 313000, People's Republic of China
- Zhi Wang: Center for Intelligent Acoustics and Signal Processing, Huzhou Institute of Zhejiang University, Huzhou 313000, People's Republic of China
- Shengsheng Cai: Center for Intelligent Acoustics and Signal Processing, Huzhou Institute of Zhejiang University, Huzhou 313000, People's Republic of China; Suzhou Melodicare Medical Technology Co., Ltd, Suzhou 215151, People's Republic of China
8. Yang C, Dai N, Wang Z, Cai S, Wang J, Hu N. Cardiopulmonary auscultation enhancement with a two-stage noise cancellation approach. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104175.
9. Wang W, Wang S, Qin D, Fang Y, Zheng Y. Heart-lung sound separation by nonnegative matrix factorization and deep learning. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104180.
10. Alqudah AM, Qazan S, Obeidat YM. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput 2022;26:13405-13429. PMID: 36186666; PMCID: PMC9510581; DOI: 10.1007/s00500-022-07499-6.
Abstract
In recent years, deep learning models have improved diagnostic performance for many diseases, especially respiratory diseases. This paper evaluates the performance of different deep learning models on raw lung auscultation sounds for detecting respiratory pathologies in digitally recorded respiratory sounds, and identifies the best deep learning model for this task. Three different deep learning models were evaluated on non-augmented and augmented datasets, where two source datasets were used to generate four different sub-datasets. The results show that all the proposed deep learning methods were successful and achieved high performance in classifying the raw lung sounds across all datasets, with and without augmentation. Among all proposed models, the CNN-LSTM model was the best on all datasets in both the augmented and non-augmented cases. Its accuracy without augmentation was 99.6%, 99.8%, 82.4%, and 99.4% for datasets 1, 2, 3, and 4, respectively, and with augmentation was 100%, 99.8%, 98.0%, and 99.5% for datasets 1, 2, 3, and 4, respectively. Augmentation notably enhanced the models' performance on the testing datasets. Moreover, the hybrid model combining CNN and LSTM techniques performed better than models based on only one of these techniques, mainly because the CNN performs automatic deep feature extraction from the lung sound while the LSTM performs the classification.
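For illustration: the CNN-LSTM hybrid pattern the paper found strongest, convolutions extracting local features from the raw waveform and an LSTM modelling their temporal order, can be sketched as follows. Layer sizes, strides, and the three-class head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D convolutions extract local features from the raw lung sound,
    an LSTM models their temporal order, and a linear head classifies."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=4), nn.ReLU())
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, wav):                    # wav: (batch, 1, samples)
        feats = self.cnn(wav).transpose(1, 2)  # (batch, frames, channels)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                # logits from the last hidden state

logits = CNNLSTM()(torch.randn(4, 1, 8000))    # four 2-s clips at 4 kHz
```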
Affiliation(s)
- Ali Mohammad Alqudah: Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Shoroq Qazan: Department of Computer Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Yusra M Obeidat: Department of Electronic Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
11. A lightweight hybrid deep learning system for cardiac valvular disease classification. Sci Rep 2022;12:14297. PMID: 35995814; PMCID: PMC9395359; DOI: 10.1038/s41598-022-18293-7.
Abstract
Cardiovascular diseases (CVDs) are a prominent cause of death globally. The introduction of medical big data and Artificial Intelligence (AI) technology has encouraged efforts to develop and deploy deep learning models for distinguishing heart sound abnormalities. These systems employ phonocardiogram (PCG) signals because they are simple and cost-effective to acquire, and automated early diagnosis of CVDs helps alleviate deadly complications. In this research, a cardiac diagnostic system combining CNN and LSTM components was developed; it uses PCG signals and utilizes either augmented or non-augmented datasets. The proposed model discriminates five heart valvular conditions, namely normal, Aortic Stenosis (AS), Mitral Regurgitation (MR), Mitral Stenosis (MS), and Mitral Valve Prolapse (MVP). The findings demonstrate that the suggested end-to-end architecture yields outstanding performance on all important evaluation metrics. For the five-class problem using the open heart sound dataset, accuracy was 98.5%, F1-score was 98.501%, and Area Under the Curve (AUC) was 0.9978 for the non-augmented dataset; accuracy was 99.87%, F1-score was 99.87%, and AUC was 0.9985 for the augmented dataset. Model performance was further evaluated using the PhysioNet/Computing in Cardiology 2016 challenge dataset; for the two-class problem, accuracy was 93.76%, F1-score was 85.59%, and AUC was 0.9505. The achieved results show that the proposed system outperforms all previous works that use the same audio signal databases. In the future, the findings will help build a multimodal structure that uses both PCG and ECG signals.
12. Wavelet and Spectral Analysis of Normal and Abnormal Heart Sound for Diagnosing Cardiac Disorders. Biomed Res Int 2022;2022:9092346. PMID: 35937404; PMCID: PMC9348924; DOI: 10.1155/2022/9092346.
Abstract
Body auscultation is a frequent clinical diagnostic procedure used to diagnose heart problems. Its key advantage is that it provides a cheap and effective solution that enables medical professionals to interpret heart sounds for the diagnosis of cardiac diseases. Signal processing can quantify the distribution of amplitude and frequency content for diagnostic purposes. In this experiment, the use of signal processing and wavelet analysis in screening cardiac disorders provided enough evidence to distinguish between the heart sounds of a healthy and an unhealthy heart. Real-time data were collected using an IoT device, and noise was reduced using the REES52 sensor. Based on features derived from the signal amplitude distribution in time- and frequency-domain analysis, mean frequency was found to be sufficiently discriminatory to distinguish between a healthy and an unhealthy heart. The results indicate adequate discrimination between the characteristics of normal and abnormal heart sounds for the automatic detection of cardiac problems by signal processing.
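For illustration: the mean-frequency feature the study relies on is the power-weighted average frequency of the spectrum. A short scipy sketch follows; the 40 Hz test tone and the sampling rate are placeholders, not study data.

```python
import numpy as np
from scipy.signal import welch

def mean_frequency(x, fs):
    """Power-weighted average frequency of the Welch spectrum."""
    f, pxx = welch(x, fs, nperseg=1024)
    return np.sum(f * pxx) / np.sum(pxx)

fs = 2000
t = np.arange(0, 5, 1 / fs)
heart_like = np.sin(2 * np.pi * 40 * t)   # placeholder for a heart recording
print(f"mean frequency: {mean_frequency(heart_like, fs):.1f} Hz")
```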
13. Wang G, Yang Y, Chen S, Fu J, Wu D, Yang A, Ma Y, Feng X. Flexible dual-channel digital auscultation patch with active noise reduction for bowel sound monitoring and application. IEEE J Biomed Health Inform 2022;26:2951-2962. PMID: 35171784; DOI: 10.1109/jbhi.2022.3151927.
Abstract
Bowel sounds (BSs) have important clinical value in the auxiliary diagnosis of digestive diseases, but because long-term monitoring is inconvenient and environmental noise interferes heavily, they have not been well studied. Most current electronic stethoscopes are rigid and bulky and lack noise reduction, so their application to long-term wearable monitoring of BS in noisy clinical environments is very limited. In this paper, a flexible dual-channel digital auscultation patch with active noise reduction is designed and developed; it is wireless, wearable, and conformably attached to abdominal skin to record BS more accurately. Ambient noise is greatly reduced through active noise reduction based on an adaptive filter. At the same time, some intermittent nonstationary noises (e.g., frictional noise) can also be removed from BS by cross-validation of multichannel simultaneous acquisition. Two kinds of typical BS signals are then taken as examples, and the feature parameters of the BS in the time and frequency domains are extracted with a time-frequency analysis algorithm. Furthermore, based on the short-term energy ratio between the four channels of dual patches, two-dimensional localization of BS on the abdominal mapping plane is realized. Finally, continuous wearable monitoring of BS was carried out for patients with postoperative ileus (POI) in a noisy ward from pre-operation (POD0) to postoperative day 7 (POD7). The obtained curve of BS occurrence frequency provides guidance for doctors to choose a reasonable feeding time for patients after surgery and accelerate their recovery. Flexible dual-channel digital auscultation patches with active noise reduction therefore have promising applications in the clinical auxiliary diagnosis of digestive diseases.
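For illustration: adaptive-filter active noise reduction typically follows the LMS pattern, in which a reference channel measures ambient noise, the filter learns how that noise appears in the body-contact channel, and the estimate is subtracted. The paper does not specify LMS; the sketch below, with illustrative signals and step size, shows the general pattern only.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """LMS adaptive noise cancellation: learn how the reference noise
    appears in the primary channel and subtract the estimate."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        noise_est = w @ x                   # filtered noise estimate
        e = primary[n] - noise_est          # error = cleaned signal sample
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
ambient = rng.standard_normal(8000)                     # reference microphone
bowel = np.sin(2 * np.pi * np.linspace(0, 40, 8000))    # placeholder BS signal
contaminated = bowel + 0.5 * ambient                    # body-contact channel
cleaned = lms_cancel(contaminated, ambient)
```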