1. Chandra J, Lin R, Kancherla D, Scott S, Sul D, Andrade D, Marzouk S, Iyer JM, Wasswa W, Villanueva C, Celi LA. Low-cost and convenient screening of disease using analysis of physical measurements and recordings. PLOS Digital Health 2024; 3:e0000574. PMID: 39298384. DOI: 10.1371/journal.pdig.0000574.
Abstract
In recent years, there has been substantial work in low-cost medical diagnostics based on the physical manifestations of disease, driven by advances in data analysis techniques and classification algorithms and by the increased availability of computing power through smart devices. Smartphones, with their ability to interface with simple sensors such as inertial measurement units (IMUs), microphones, and piezoelectric sensors, or with convenient attachments such as lenses, have revolutionized the ability to collect medically relevant data easily. Even when the data have relatively low resolution or signal-to-noise ratio, newer algorithms have made it possible to identify disease from them. Many low-cost diagnostic tools have been created in medical fields spanning from neurology to dermatology to obstetrics. These tools are particularly useful in low-resource areas where access to expensive diagnostic equipment may not be possible. The ultimate goal would be the creation of a "diagnostic toolkit" consisting of a smartphone and a set of sensors and attachments that can be used to screen for a wide set of diseases in a community healthcare setting. However, a few concerns in low-cost diagnostics still need to be overcome: the lack of incentives to bring these devices to market, algorithmic bias, the "black box" nature of the algorithms, and data storage and transfer concerns.
Affiliation(s)
- Jay Chandra
- Harvard Medical School, Harvard University, Boston, Massachusetts, United States of America
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Raymond Lin
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- Devin Kancherla
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- Sophia Scott
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- Daniel Sul
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Duke University, Durham, North Carolina, United States of America
- Daniela Andrade
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- Sammer Marzouk
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- Jay M Iyer
- Global Alliance for Medical Innovation, Cambridge, Massachusetts, United States of America
- Harvard College, Harvard University, Boston, Massachusetts, United States of America
- William Wasswa
- Department of Biomedical Sciences and Engineering, Mbarara University of Science and Technology, Mbarara, Uganda
- Cleva Villanueva
- Escuela Superior de Medicina, Instituto Politécnico Nacional, México, D.F., México
- Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
2. Zhou S, Huang L, Zhong X. Application of Ward Noise Management in Perioperative Hepatobiliary Surgery: A Retrospective Study. Noise Health 2024; 26:272-279. PMID: 39345064. PMCID: PMC11539982. DOI: 10.4103/nah.nah_23_24.
Abstract
OBJECTIVE: To explore the effect of ward noise management during the perioperative period of hepatobiliary surgery. METHODS: The clinical data of 295 patients undergoing hepatobiliary surgery admitted to the People's Hospital of Zunyi City Bo Zhou District from March 2020 to March 2023 were retrospectively analyzed. According to the perioperative management programme received, patients were divided into a control group (routine perioperative management) and an observation group (routine perioperative management plus ward noise management). General patient data were matched through propensity score matching, and 55 cases were allocated to each group. After matching, the clinical indicators of the two groups were compared to evaluate the effect of ward noise management on patients undergoing hepatobiliary surgery. RESULTS: No significant difference in general data was found between the two groups (P > 0.05). After management, postoperative recovery indicators, such as time to resumption of feeding, time to first flatus, time to first defecation, time to first ambulation, and the incidence of postoperative complications, did not differ significantly between the observation and control groups (P > 0.05). The Hamilton Anxiety Scale, Hamilton Depression Scale, and Pittsburgh Sleep Quality Index scores of the observation group were lower than those of the control group (P < 0.05). The average noise levels (in decibels) during the day, at night, and over 24 hours were lower in the observation group than in the control group (P < 0.05). CONCLUSIONS: Ward noise management can reduce negative emotions, improve sleep quality, and promote recovery in patients undergoing hepatobiliary surgery, and therefore has value for wider clinical adoption.
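As an illustration of the propensity score matching step described in the abstract, the sketch below performs 1:1 nearest-neighbour matching on a logistic-regression propensity score. It is not the study's code; the DataFrame `df`, its `group` column, and the covariate list are hypothetical placeholders, and caliper and balance checks are omitted.
```python
# Illustrative sketch of 1:1 propensity score matching (not the study's code).
# Assumes a DataFrame `df` with a binary `group` column and baseline covariates.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df: pd.DataFrame, covariates: list[str], group_col: str = "group") -> pd.DataFrame:
    # Estimate propensity scores: P(group = 1 | covariates)
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[group_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[group_col] == 1]
    controls = df[df[group_col] == 0].copy()

    matched_rows = []
    for _, row in treated.iterrows():
        if controls.empty:
            break
        # Greedy nearest-neighbour match on the propensity score, without replacement
        idx = (controls["ps"] - row["ps"]).abs().idxmin()
        matched_rows.extend([row, controls.loc[idx]])
        controls = controls.drop(index=idx)
    return pd.DataFrame(matched_rows)
```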
Affiliation(s)
- Shaobi Zhou
- Department of Hepatobiliary and Pancreatic Surgery, People’s Hospital of Zunyi City Bo Zhou District, Zunyi 563100, Guizhou, China
- Ling Huang
- Department of Hepatobiliary and Pancreatic Surgery, People’s Hospital of Zunyi City Bo Zhou District, Zunyi 563100, Guizhou, China
- Xiaying Zhong
- Department of Health Care Ward, Zhongshan Hospital Xiamen University, Xiamen 361004, Fujian, China
3. Razvadauskas H, Vaičiukynas E, Buškus K, Arlauskas L, Nowaczyk S, Sadauskas S, Naudžiūnas A. Exploring classical machine learning for identification of pathological lung auscultations. Comput Biol Med 2024; 168:107784. PMID: 38042100. DOI: 10.1016/j.compbiomed.2023.107784.
Abstract
The use of machine learning in biomedical research has surged in recent years thanks to advances in devices and artificial intelligence. Our aim is to expand this body of knowledge by applying machine learning to pulmonary auscultation signals. Despite improvements in digital stethoscopes and attempts to find synergy between them and artificial intelligence, solutions for their use in clinical settings remain scarce. Physicians continue to infer initial diagnoses with less sophisticated means, resulting in low accuracy, leading to suboptimal patient care. To arrive at a correct preliminary diagnosis, the auscultation diagnostics need to be of high accuracy. Due to the large number of auscultations performed, data availability opens up opportunities for more effective sound analysis. In this study, digital 6-channel auscultations of 45 patients were used in various machine learning scenarios, with the aim of distinguishing between normal and abnormal pulmonary sounds. Audio features (such as fundamental frequencies F0-4, loudness, HNR, DFA, as well as descriptive statistics of log energy, RMS and MFCC) were extracted using the Python library Surfboard. Windowing, feature aggregation, and concatenation strategies were used to prepare data for machine learning algorithms in unsupervised (fair-cut forest, outlier forest) and supervised (random forest, regularized logistic regression) settings. The evaluation was carried out using 9-fold stratified cross-validation repeated 30 times. Decision fusion by averaging the outputs for a subject was also tested and found to be helpful. Supervised models showed a consistent advantage over unsupervised ones, with random forest achieving a mean AUC ROC of 0.691 (accuracy 71.11%, Kappa 0.416, F1-score 0.675) in side-based detection and a mean AUC ROC of 0.721 (accuracy 68.89%, Kappa 0.371, F1-score 0.650) in patient-based detection.
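The sketch below illustrates the general shape of such a pipeline: window-level audio features, a random forest, repeated stratified cross-validation, and decision fusion by averaging a subject's window scores. It is not the authors' code; librosa MFCC statistics stand in for the Surfboard feature set, the split is not guaranteed to be subject-wise, and `recordings`, `labels`, and `subjects` are hypothetical arrays.
```python
# Illustrative sketch of the kind of pipeline described above (not the authors' code).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.metrics import roc_auc_score

def window_features(audio: np.ndarray, sr: int = 4000) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    # Aggregate frame-level coefficients into fixed-length descriptive statistics
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def evaluate(recordings, labels, subjects, sr=4000):
    X = np.vstack([window_features(r, sr) for r in recordings])
    y, groups = np.asarray(labels), np.asarray(subjects)
    cv = RepeatedStratifiedKFold(n_splits=9, n_repeats=30, random_state=0)
    aucs = []
    for train_idx, test_idx in cv.split(X, y):
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        # Decision fusion: average window-level scores for each test subject
        fused = {s: scores[groups[test_idx] == s].mean() for s in np.unique(groups[test_idx])}
        truth = {s: y[test_idx][groups[test_idx] == s][0] for s in fused}
        if len(set(truth.values())) == 2:  # AUC needs both classes present
            aucs.append(roc_auc_score(list(truth.values()), list(fused.values())))
    return float(np.mean(aucs))
```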
4. Zauli M, Peppi LM, Di Bonaventura L, Arcobelli VA, Spadotto A, Diemberger I, Coppola V, Mellone S, De Marchi L. Exploring Microphone Technologies for Digital Auscultation Devices. Micromachines 2023; 14:2092. PMID: 38004949. PMCID: PMC10673215. DOI: 10.3390/mi14112092.
Abstract
The aim of this work is to present a preliminary study for the design of a digital auscultation system, i.e., a novel wearable device for patient chest auscultation and a digital stethoscope. The development and testing of the electronic stethoscope prototype are reported, with an emphasis on the description and selection of sound transduction systems and analog electronic processing. Various microphone technologies, such as micro-electro-mechanical systems (MEMS), electret condensers, and piezoelectric diaphragms, are compared to identify the most suitable transducer for auscultation. In addition, we report on the design and development of a digital acquisition system for recording body sounds, which uses a modular approach to accommodate the chosen analog and digital microphones. Tests were performed on a purpose-built phantom setup, and a qualitative comparison between the sounds recorded with the newly developed acquisition device and those recorded with two commercial digital stethoscopes is reported.
Affiliation(s)
- Matteo Zauli
- ARCES—Advanced Research Center on Electronic Systems for Information and Communication Technologies “Ercole De Castro”, University of Bologna, 40136 Bologna, Italy; (M.Z.); (L.M.P.); (V.C.)
- Lorenzo Mistral Peppi
- ARCES—Advanced Research Center on Electronic Systems for Information and Communication Technologies “Ercole De Castro”, University of Bologna, 40136 Bologna, Italy; (M.Z.); (L.M.P.); (V.C.)
- Valerio Antonio Arcobelli
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40136 Bologna, Italy; (V.A.A.); (S.M.)
- Alberto Spadotto
- Institute of Cardiology, Department of Medical and Surgical Sciences, University of Bologna, Policlinico S.Orsola-Malpighi, via Massarenti 9, 40138 Bologna, Italy; (A.S.); (I.D.)
- UOC di Cardiologia, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Dipartimento Cardio-Toraco-Vascolare, via Massarenti 9, 40138 Bologna, Italy
- Igor Diemberger
- Institute of Cardiology, Department of Medical and Surgical Sciences, University of Bologna, Policlinico S.Orsola-Malpighi, via Massarenti 9, 40138 Bologna, Italy; (A.S.); (I.D.)
- UOC di Cardiologia, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Dipartimento Cardio-Toraco-Vascolare, via Massarenti 9, 40138 Bologna, Italy
- Valerio Coppola
- ARCES—Advanced Research Center on Electronic Systems for Information and Communication Technologies “Ercole De Castro”, University of Bologna, 40136 Bologna, Italy; (M.Z.); (L.M.P.); (V.C.)
- Sabato Mellone
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40136 Bologna, Italy; (V.A.A.); (S.M.)
- Luca De Marchi
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40136 Bologna, Italy; (V.A.A.); (S.M.)
5. Shokouhmand A, Wen H, Khan S, Puma JA, Patel A, Green P, Ayazi F, Ebadi N. Diagnosis of Coexisting Valvular Heart Diseases Using Image-to-Sequence Translation of Contact Microphone Recordings. IEEE Trans Biomed Eng 2023; 70:2540-2551. PMID: 37028021. DOI: 10.1109/tbme.2023.3253381.
Abstract
OBJECTIVE: Development of a contact microphone-driven screening framework for the diagnosis of coexisting valvular heart diseases (VHDs). METHODS: A sensitive accelerometer contact microphone (ACM) is employed to capture heart-induced acoustic components on the chest wall. Inspired by the human auditory system, ACM recordings are first transformed into Mel-frequency cepstral coefficients (MFCCs) and their first and second derivatives, resulting in 3-channel images. An image-to-sequence translation network based on the convolution-meets-transformer (CMT) architecture is then applied to each image to find local and global dependencies and predict a 5-digit binary sequence, where each digit corresponds to the presence of a specific type of VHD. The performance of the proposed framework is evaluated on 58 VHD patients and 52 healthy individuals using a 10-fold leave-subject-out cross-validation (10-LSOCV) approach. RESULTS: Statistical analyses suggest an average sensitivity, specificity, accuracy, positive predictive value, and F1 score of 93.28%, 98.07%, 96.87%, 92.97%, and 92.4%, respectively, for the detection of coexisting VHDs. Furthermore, areas under the curve (AUC) of 0.99 and 0.98 are reported for the validation and test sets, respectively. CONCLUSION: The high performance achieved demonstrates that local and global features of ACM recordings effectively characterize heart murmurs associated with valvular abnormalities. SIGNIFICANCE: Limited access of primary care physicians to echocardiography machines has resulted in a low sensitivity of 44% when using a stethoscope to identify heart murmurs. The proposed framework provides accurate decision-making on the presence of VHDs, thus reducing the number of undetected VHD patients in primary care settings.
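A minimal sketch of the 3-channel input described above (MFCCs plus their first and second derivatives), assuming librosa and a hypothetical recording; the CMT network itself is not reproduced here, and the sampling rate and coefficient count are placeholders.
```python
# Minimal sketch of a 3-channel MFCC "image" (MFCCs + first and second derivatives).
# Not the authors' code; sampling rate and n_mfcc are illustrative placeholders.
import numpy as np
import librosa

def mfcc_image(audio: np.ndarray, sr: int = 2000, n_mfcc: int = 40) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    delta1 = librosa.feature.delta(mfcc, order=1)   # first derivative (velocity)
    delta2 = librosa.feature.delta(mfcc, order=2)   # second derivative (acceleration)
    # Stack into a (3, n_mfcc, n_frames) array, analogous to a 3-channel image
    return np.stack([mfcc, delta1, delta2], axis=0)

# Example: 10 s of synthetic data standing in for a contact microphone recording at 2 kHz
image = mfcc_image(np.random.randn(2000 * 10).astype(np.float32))
print(image.shape)  # (3, 40, n_frames)
```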
6. Rennoll V, McLane I, Eisape A, Grant D, Hahn H, Elhilali M, West JE. Electrostatic Acoustic Sensor with an Impedance-Matched Diaphragm Characterized for Body Sound Monitoring. ACS Appl Bio Mater 2023; 6:3241-3256. PMID: 37470762. PMCID: PMC10804910. DOI: 10.1021/acsabm.3c00359.
Abstract
Acoustic sensors are able to capture more incident energy if their acoustic impedance closely matches the acoustic impedance of the medium being probed, such as skin or wood. Controlling the acoustic impedance of polymers can be achieved by selecting materials with appropriate densities and stiffnesses as well as adding ceramic nanoparticles. This study follows a statistical methodology to examine the impact of polymer type and nanoparticle addition on the fabrication of acoustic sensors with desired acoustic impedances in the range of 1-2.2 MRayls. The proposed method using a design of experiments approach measures sensors with diaphragms of varying impedances when excited with acoustic vibrations traveling through wood, gelatin, and plastic. The sensor diaphragm is subsequently optimized for body sound monitoring, and the sensor's improved body sound coherence and airborne noise rejection are evaluated on an acoustic phantom in simulated noise environments and compared to electronic stethoscopes with onboard noise cancellation. The impedance-matched sensor demonstrates high sensitivity to body sounds, low sensitivity to airborne sound, a frequency response comparable to two state-of-the-art electronic stethoscopes, and the ability to capture lung and heart sounds from a real subject. Due to its small size, use of flexible materials, and rejection of airborne noise, the sensor provides an improved solution for wearable body sound monitoring, as well as sensing from other mediums with acoustic impedances in the range of 1-2.2 MRayls, such as water and wood.
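For orientation, the characteristic acoustic impedance of a medium is the product of its density and speed of sound (Z = ρc). The values below are textbook approximations, not figures from the paper; they simply show why the quoted 1-2.2 MRayl target range corresponds to water-like and tissue-like media.
```python
# Characteristic acoustic impedance Z = rho * c, in MRayl (1 MRayl = 1e6 Pa·s/m).
# Density and sound-speed values are textbook approximations, not taken from the paper.
media = {
    "air":         (1.2,    343.0),   # kg/m^3, m/s
    "water":       (998.0,  1482.0),
    "soft tissue": (1050.0, 1540.0),  # approximate values often used for skin/soft tissue
}
for name, (rho, c) in media.items():
    z_mrayl = rho * c / 1e6
    print(f"{name:>11}: Z = {z_mrayl:.4f} MRayl")
# Air is ~0.0004 MRayl, which is why an air-backed diaphragm reflects most body-sound
# energy, while water and soft tissue fall inside the 1-2.2 MRayl range targeted above.
```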
Affiliation(s)
- Valerie Rennoll
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- Ian McLane
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- Adebayo Eisape
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- Drew Grant
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- Helena Hahn
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
- James E West
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, United States
7. Kala A, McCollum ED, Elhilali M. Reference-free auscultation quality metric and its trends. Biomed Signal Process Control 2023; 85:104852. PMID: 38274002. PMCID: PMC10809975. DOI: 10.1016/j.bspc.2023.104852.
Abstract
Stethoscopes are used ubiquitously in clinical settings to 'listen' to lung sounds. The use of these systems in a variety of healthcare environments (hospitals, urgent care rooms, private offices, community sites, mobile clinics, etc.) presents a range of challenges in terms of ambient noise and distortions that prevent lung signals from being heard clearly or processed accurately by auscultation devices. With advances in technology, computerized techniques have been developed to automate analysis or provide a digital rendering of lung sounds. However, most approaches are developed and tested in controlled environments and do not reflect real-world conditions where auscultation signals are typically acquired. Without a priori access to a recording of the ambient noise (for signal-to-noise estimation) or a reference signal that reflects the true undistorted lung sound, it is difficult to evaluate the quality of the lung signal and its potential clinical interpretability. The current study proposes an objective, reference-free Auscultation Quality Metric (AQM) which combines low-level signal attributes with high-level representational embeddings mapped to a nonlinear quality space to provide an independent evaluation of the auscultation quality. This metric is carefully designed to judge the signal solely on its integrity relative to external distortions and masking effects, without mistaking an adventitious breathing pattern for a low-quality auscultation. The current study explores the robustness of the proposed AQM method across multiple clinical categorizations and different distortion types. It also evaluates the temporal sensitivity of this approach and its translational impact for deployment in digital auscultation devices.
Affiliation(s)
- Annapurna Kala
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Eric D. McCollum
- Global Program of Pediatric Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
8. Kala A, Elhilali M. Constrained Synthetic Sampling for Augmentation of Crackle Lung Sounds. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38083624. PMCID: PMC10823588. DOI: 10.1109/embc40787.2023.10340579.
Abstract
Crackles are explosive breathing patterns caused by lung air sacs filling with fluid and act as an indicator for a plethora of pulmonary diseases. Clinical studies suggest a strong correlation between the presence of these adventitious auscultations and mortality rate, especially in pediatric patients, underscoring the importance of their pathological indication. While clinically important, crackles occur rarely in breathing signals relative to other phases and abnormalities of lung sounds, imposing a considerable class imbalance in developing learning methodologies for automated tracking and diagnosis of lung pathologies. The scarcity and clinical relevance of crackle sounds compel a need for exploring data augmentation techniques to enrich the space of crackle signals. Given their unique nature, the current study proposes a crackle-specific constrained synthetic sampling (CSS) augmentation that captures the geometric properties of crackles across different projected object spaces. We also outline a task-agnostic validation methodology that evaluates different augmentation techniques based on their goodness of fit relative to the space of original crackles. This evaluation considers both the separability of the manifold space generated by augmented data samples as well as a statistical distance space of the synthesized data relative to the original. Compared to a range of augmentation techniques, the proposed constrained-synthetic sampling of crackle sounds is shown to generate the most analogous samples relative to original crackle sounds, highlighting the importance of carefully considering the statistical constraints of the class under study.
9. Kraman SS, Pasterkamp H, Wodicka GR. Smart Devices Are Poised to Revolutionize the Usefulness of Respiratory Sounds. Chest 2023; 163:1519-1528. PMID: 36706908. PMCID: PMC10925548. DOI: 10.1016/j.chest.2023.01.024.
Abstract
The association between breathing sounds and respiratory health or disease has been exceptionally useful in the practice of medicine since the advent of the stethoscope. Remote patient monitoring technology and artificial intelligence offer the potential to develop practical means of assessing respiratory function or dysfunction through continuous assessment of breathing sounds while patients are at home, at work, or even asleep. Automated reports such as cough counts or the percentage of breathing cycles containing wheezes can be delivered to a practitioner via secure electronic means or returned to the clinical office at the first opportunity. This has not previously been possible. The four respiratory sounds that most lend themselves to this technology are wheezes, to detect breakthrough asthma at night and even occupational asthma when a patient is at work; snoring, as an indicator of OSA or of the adequacy of CPAP settings; cough, for which long-term recording can objectively assess treatment adequacy; and crackles, which, although subtle and often overlooked, can contain important clinical information when they appear in a home recording. In recent years, a flurry of publications in the engineering literature has described the construction, usage, and testing of such devices, but little of this has appeared in the medical literature. The potential value of this technology for pulmonary medicine is compelling. We expect that these tiny, smart devices soon will allow us to address clinical questions that occur away from the clinic.
Affiliation(s)
- Steve S Kraman
- Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, University of Kentucky, Lexington, KY.
- Hans Pasterkamp
- Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- George R Wodicka
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
10. Yang C, Dai N, Wang Z, Cai S, Wang J, Hu N. Cardiopulmonary auscultation enhancement with a two-stage noise cancellation approach. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104175.
11. Grant D, McLane I, Rennoll V, West J. Considerations and Challenges for Real-World Deployment of an Acoustic-Based COVID-19 Screening System. Sensors (Basel) 2022; 22:9530. PMID: 36502232. PMCID: PMC9739601. DOI: 10.3390/s22239530.
Abstract
Coronavirus disease 2019 (COVID-19) has led to countless deaths and widespread global disruptions. Acoustic-based artificial intelligence (AI) tools could provide a simple, scalable, and prompt method to screen for COVID-19 using easily acquirable physiological sounds. These systems have been demonstrated previously and have shown promise but lack robust analysis of their deployment in real-world settings when faced with diverse recording equipment, noise environments, and test subjects. The primary aim of this work is to begin to understand the impacts of these real-world deployment challenges on the system performance. Using Mel-Frequency Cepstral Coefficients (MFCC) and RelAtive SpecTrAl-Perceptual Linear Prediction (RASTA-PLP) features extracted from cough, speech, and breathing sounds in a crowdsourced dataset, we present a baseline classification system that obtains an average receiver operating characteristic area under the curve (AUC-ROC) of 0.77 when discriminating between COVID-19 and non-COVID subjects. The classifier performance is then evaluated on four additional datasets, resulting in performance variations between 0.64 and 0.87 AUC-ROC, depending on the sound type. By analyzing subsets of the available recordings, it is noted that the system performance degrades with certain recording devices, noise contamination, and with symptom status. Furthermore, performance degrades when a uniform classification threshold from the training data is subsequently used across all datasets. However, the system performance is robust to confounding factors, such as gender, age group, and the presence of other respiratory conditions. Finally, when analyzing multiple speech recordings from the same subjects, the system achieves promising performance with an AUC-ROC of 0.78, though the classification does appear to be impacted by natural speech variations. Overall, the proposed system, and by extension other acoustic-based diagnostic aids in the literature, could provide comparable accuracy to rapid antigen testing but significant deployment challenges need to be understood and addressed prior to clinical use.
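The threshold-transfer issue noted above can be illustrated with a short sketch: AUC-ROC is computed per dataset, and a single decision threshold chosen on the training data (here via Youden's J statistic) is then applied everywhere. This is not the authors' evaluation code, and the arrays are made-up placeholders standing in for classifier scores on different cough/speech datasets.
```python
# Sketch of per-dataset AUC-ROC versus applying one fixed decision threshold
# chosen on the training data. All arrays are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, balanced_accuracy_score

def threshold_from_training(y_train, scores_train):
    # Youden's J statistic: threshold maximizing (sensitivity + specificity - 1)
    fpr, tpr, thr = roc_curve(y_train, scores_train)
    return thr[np.argmax(tpr - fpr)]

def evaluate_datasets(datasets, fixed_threshold):
    for name, (y, scores) in datasets.items():
        auc = roc_auc_score(y, scores)
        bacc = balanced_accuracy_score(y, (scores >= fixed_threshold).astype(int))
        print(f"{name}: AUC-ROC={auc:.2f}, balanced accuracy at fixed threshold={bacc:.2f}")

# Usage with made-up data standing in for the training set and an external dataset
rng = np.random.default_rng(0)
y0, s0 = rng.integers(0, 2, 500), rng.random(500)
fixed = threshold_from_training(y0, s0)
evaluate_datasets({"dataset_A": (y0, s0), "dataset_B": (rng.integers(0, 2, 300), rng.random(300))}, fixed)
```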
12. Rennoll V, McLane I, Elhilali M, West JE. Optimized Acoustic Phantom Design for Characterizing Body Sound Sensors. Sensors (Basel) 2022; 22:9086. PMID: 36501787. PMCID: PMC9735779. DOI: 10.3390/s22239086.
Abstract
Many commercial and prototype devices are available for capturing body sounds that provide important information on the health of the lungs and heart; however, there is no agreed standardized method to characterize and compare these devices. Acoustic phantoms are commonly used because they generate repeatable sounds that couple to devices through a material layer that mimics the characteristics of skin. While multiple acoustic phantoms have been presented in the literature, it is unclear how design elements, such as the driver type and coupling layer, impact the acoustical characteristics of the phantom and, therefore, the device being measured. Here, a design of experiments approach is used to compare the frequency responses of various phantom constructions. An acoustic phantom that uses a loudspeaker to generate sound and excite a gelatin layer supported by a grid is determined to have a flatter and more uniform frequency response than other possible designs with a sound exciter and plate support. When measured on an optimal acoustic phantom, three devices are shown to produce more consistent measurements with added weight and differing positions than on a non-optimal phantom. Overall, the statistical models developed here provide greater insight into acoustic phantom design for improved device characterization.
13. Azam FB, Ansari MI, Nuhash SISK, McLane I, Hasan T. Cardiac anomaly detection considering an additive noise and convolutional distortion model of heart sound recordings. Artif Intell Med 2022; 133:102417. PMID: 36328670. DOI: 10.1016/j.artmed.2022.102417.
Abstract
Cardiac auscultation is an essential point-of-care method used for the early diagnosis of heart diseases. Automatic analysis of heart sounds for abnormality detection is faced with the challenges of additive noise and sensor-dependent degradation. This paper aims to develop methods to address the cardiac abnormality detection problem when both of these components are present in the cardiac auscultation sound. We first mathematically analyze the effect of additive noise and convolutional distortion on short-term mel-filterbank energy-based features and a Convolutional Neural Network (CNN) layer. Based on the analysis, we propose a combination of linear and logarithmic spectrogram-image features. These 2D features are provided as input to a residual CNN network (ResNet) for heart sound abnormality detection. Experimental validation is performed first on an open-access, multiclass heart sound dataset where we analyzed the effect of additive noise by mixing lung sound noise with the recordings. In noisy conditions, the proposed method outperforms one of the best-performing methods in the literature achieving an Macc (mean of sensitivity and specificity) of 89.55% and an average F-1 score of 82.96%, respectively, when averaged over all noise levels. Next, we perform heart sound abnormality detection (binary classification) experiments on the 2016 Physionet/CinC Challenge dataset that involves noisy recordings obtained from multiple stethoscope sensors. The proposed method achieves significantly improved results compared to the conventional approaches on this dataset, in the presence of both additive noise and channel distortion, with an area under the ROC (receiver operating characteristics) curve (AUC) of 91.36%, F-1 score of 84.09%, and Macc of 85.08%. We also show that the proposed method shows the best mean accuracy across different source domains, including stethoscope and noise variability, demonstrating its effectiveness in different recording conditions. The proposed combination of linear and logarithmic features along with the ResNet classifier effectively minimizes the impact of background noise and sensor variability for classifying phonocardiogram (PCG) signals. The method thus paves the way toward developing computer-aided cardiac auscultation systems in noisy environments using low-cost stethoscopes.
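The sketch below shows one way to build a combined linear and logarithmic spectrogram-image input of the kind described above, as a two-channel array for a ResNet-style classifier. It is an illustrative reading of the feature design, not the authors' implementation; the PCG segment, normalization, and spectrogram parameters are placeholders.
```python
# Sketch of combined linear- and log-scale spectrogram-image features for a PCG segment.
# Not the authors' implementation; the input signal and parameters are placeholders.
import numpy as np
import librosa

def pcg_feature_image(pcg: np.ndarray, sr: int = 1000, n_mels: int = 64) -> np.ndarray:
    mel = librosa.feature.melspectrogram(y=pcg, sr=sr, n_fft=256, hop_length=64, n_mels=n_mels)
    linear = mel / (mel.max() + 1e-12)                  # linear-scale channel
    log = librosa.power_to_db(mel, ref=np.max)          # logarithmic (dB) channel
    log = (log - log.min()) / (log.max() - log.min() + 1e-12)
    # Two-channel image to be fed to a ResNet-style classifier
    return np.stack([linear, log], axis=0)

image = pcg_feature_image(np.random.randn(1000 * 5).astype(np.float32))
print(image.shape)  # (2, 64, n_frames)
```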
Affiliation(s)
- Farhat Binte Azam
- mHealth Lab, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Md Istiaq Ansari
- mHealth Lab, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Shoyad Ibn Sabur Khan Nuhash
- mHealth Lab, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Ian McLane
- Sonavi Labs Inc., Baltimore, 21230, MD, USA
- Taufiq Hasan
- mHealth Lab, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh.
14. Wu YC, Han CC, Chang CS, Chang FL, Chen SF, Shieh TY, Chen HM, Lin JY. Development of an Electronic Stethoscope and a Classification Algorithm for Cardiopulmonary Sounds. Sensors (Basel) 2022; 22:4263. PMID: 35684884. PMCID: PMC9185316. DOI: 10.3390/s22114263.
Abstract
With conventional stethoscopes, auscultation results may vary from one doctor to another owing to differences in hearing ability (which declines with age) and in professional training, and problematic cardiopulmonary sounds cannot be recorded for later analysis. In this paper, to resolve these issues, an electronic stethoscope was developed, consisting of a traditional stethoscope with a condenser microphone embedded in the head to collect cardiopulmonary sounds, and an AI-based classifier for cardiopulmonary sounds was proposed. Different placements of the microphone in the stethoscope head, together with amplification and filter circuits, were explored and analyzed using the fast Fourier transform (FFT) to evaluate their noise-reduction effects. After testing, the microphone placed in the stethoscope head surrounded by cork was found to provide better noise reduction. For classifying normal (healthy) and abnormal (pathological) cardiopulmonary sounds, each sample is first segmented into several small frames, and a principal component analysis (PCA) is performed on each frame. A difference signal is obtained by subtracting the PCA reconstruction from the original signal. MFCC (Mel-frequency cepstral coefficient) and statistical features are extracted from the difference signal, and ensemble learning is used as the classifier. The final result is determined by voting over the classification results of the individual frames. After testing, two distinct classifiers are proposed, one for heart sounds and one for lung sounds. The best voting range for heart sounds falls at 5-45% and that for lung sounds at 5-65%. The best accuracy of 86.9%, sensitivity of 81.9%, specificity of 91.8%, and F1 score of 86.1% are obtained for heart sounds using 2 s frame segmentation with a 20% overlap, whereas the best accuracy of 73.3%, sensitivity of 66.7%, specificity of 80%, and F1 score of 71.5% are obtained for lung sounds using 5 s frame segmentation with a 50% overlap.
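A simplified sketch of the frame segmentation, PCA-based difference signal, MFCC feature extraction, and frame-level voting described above is given below. It is not the published code; the 100-sample sub-segment length used for the PCA and the `classifier` object are assumptions made purely for illustration.
```python
# Simplified sketch of a frame/PCA-difference/voting pipeline (not the published code).
# `classifier` is any fitted scikit-learn estimator; audio arrays are hypothetical.
import numpy as np
import librosa
from sklearn.decomposition import PCA

def frames(audio: np.ndarray, sr: int, frame_s: float = 2.0, overlap: float = 0.2):
    size = int(frame_s * sr)
    step = int(size * (1 - overlap))
    return [audio[i:i + size] for i in range(0, len(audio) - size + 1, step)]

def difference_features(frame: np.ndarray, sr: int) -> np.ndarray:
    # Reconstruct the frame from its first principal component and subtract it,
    # then summarize the residual with MFCC statistics. The 100-sample sub-segment
    # length is an arbitrary choice for this sketch.
    segments = frame[: len(frame) // 100 * 100].reshape(-1, 100)
    pca = PCA(n_components=1).fit(segments)
    residual = (segments - pca.inverse_transform(pca.transform(segments))).ravel()
    mfcc = librosa.feature.mfcc(y=residual.astype(np.float32), sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def predict_recording(audio: np.ndarray, sr: int, classifier) -> int:
    X = np.vstack([difference_features(f, sr) for f in frames(audio, sr)])
    votes = classifier.predict(X)
    return int(votes.mean() >= 0.5)   # majority vote over frame-level predictions
```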
Affiliation(s)
- Yu-Chi Wu
- Department of Electrical Engineering, National United University, Miaoli City 36003, Taiwan; (F.-L.C.); (S.-F.C.); (J.-Y.L.)
- Chin-Chuan Han
- Department of Computer Science and Information Engineering, National United University, Miaoli City 36003, Taiwan;
- Chao-Shu Chang
- Department of Information Management, National United University, Miaoli City 36003, Taiwan;
- Fu-Lin Chang
- Department of Electrical Engineering, National United University, Miaoli City 36003, Taiwan; (F.-L.C.); (S.-F.C.); (J.-Y.L.)
- Shi-Feng Chen
- Department of Electrical Engineering, National United University, Miaoli City 36003, Taiwan; (F.-L.C.); (S.-F.C.); (J.-Y.L.)
- Tsu-Yi Shieh
- Section of Clinical Training, Department of Medical Education, Taichung Veterans General Hospital, Taichung City 40705, Taiwan;
- Division of Allergy, Immunology and Rheumatology, Taichung Veterans General Hospital, Taichung City 40705, Taiwan
- Hsian-Min Chen
- Center for Quantitative Imaging in Medicine (CQUIM), Department of Medical Research, Taichung Veterans General Hospital, Taichung City 40705, Taiwan;
- Jin-Yuan Lin
- Department of Electrical Engineering, National United University, Miaoli City 36003, Taiwan; (F.-L.C.); (S.-F.C.); (J.-Y.L.)
15. Ahmed S, Sultana S, Khan AM, Islam MS, Habib GMM, McLane IM, McCollum ED, Baqui AH, Cunningham S, Nair H. Digital auscultation as a diagnostic aid to detect childhood pneumonia: A systematic review. J Glob Health 2022; 12:04033. PMID: 35493777. PMCID: PMC9024283. DOI: 10.7189/jogh.12.04033.
Abstract
Background: Frontline health care workers use the World Health Organization Integrated Management of Childhood Illnesses (IMCI) guidelines for child pneumonia care in low-resource settings. The IMCI pneumonia diagnostic criterion performs with low specificity, resulting in antibiotic overtreatment. Digital auscultation with automated lung sound analysis may improve the diagnostic performance of IMCI pneumonia guidelines. This systematic review aims to summarize the evidence on detecting adventitious lung sounds by digital auscultation with automated analysis, compared to reference physician acoustic analysis, for child pneumonia diagnosis. Methods: Articles were searched in MEDLINE, Embase, CINAHL Plus, Web of Science, Global Health, IEEE Xplore, Scopus, and ClinicalTrials.gov from the inception of each database to October 27, 2021, and reference lists of selected studies and relevant review articles were searched manually. Studies reporting the diagnostic performance of digital auscultation and/or computerized lung sound analysis compared against physicians' acoustic analysis for pneumonia diagnosis in children under the age of 5 were eligible for this systematic review. Retrieved citations were screened and eligible studies were included for extraction. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. All these steps were performed independently by two authors, and disagreements between the reviewers were resolved through discussion with an arbiter. Narrative data synthesis was performed. Results: A total of 3801 citations were screened and 46 full-text articles were assessed; 10 studies met the inclusion criteria. Half of the studies used a publicly available respiratory sound database to evaluate their proposed work. Reported methodologies/approaches and performance metrics for classifying adventitious lung sounds varied widely across the included studies. All included studies except one reported the overall diagnostic performance of digital auscultation/computerised sound analysis in distinguishing adventitious lung sounds, irrespective of the disease condition or age of the participants. The reported accuracies for classifying adventitious lung sounds in the included studies varied from 66.3% to 100%. However, it remains unclear to what extent these results would apply to classifying adventitious lung sounds in children with pneumonia. Conclusions: This systematic review found very limited evidence on the diagnostic performance of digital auscultation for diagnosing pneumonia in children. Well-designed studies and robust reporting are required to evaluate the accuracy of digital auscultation in the paediatric population.
Affiliation(s)
- Salahuddin Ahmed
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Ahad M Khan
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Mohammad S Islam
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Child Health Research Foundation, Dhaka, Bangladesh
- GM Monsur Habib
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Bangladesh Primary Care Respiratory Society, Khulna, Bangladesh
- Eric D McCollum
- Global Program for Pediatric Respiratory Sciences, Eudowood Division of Paediatric Respiratory Sciences, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Abdullah H Baqui
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Steven Cunningham
- Department of Child Life and Health, Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
- Harish Nair
- Usher Institute, University of Edinburgh, Edinburgh, UK
16. Ahmed S, Mitra DK, Nair H, Cunningham S, Khan AM, Islam AA, McLane IM, Chowdhury NH, Begum N, Shahidullah M, Islam MS, Norrie J, Campbell H, Sheikh A, Baqui AH, McCollum ED. Digital auscultation as a novel childhood pneumonia diagnostic tool for community clinics in Sylhet, Bangladesh: protocol for a cross-sectional study. BMJ Open 2022; 12:e059630. PMID: 35140164. PMCID: PMC8830242. DOI: 10.1136/bmjopen-2021-059630.
Abstract
INTRODUCTION: The WHO's Integrated Management of Childhood Illnesses (IMCI) algorithm diagnoses childhood pneumonia by counting the respiratory rate and observing for respiratory distress. The IMCI case definition for pneumonia performs with high sensitivity but low specificity, leading to overdiagnosis of child pneumonia and unnecessary antibiotic use. Including lung auscultation in IMCI could improve the specificity of pneumonia diagnosis. Our objectives are: (1) to assess the quality of lung sound recordings made by primary healthcare workers (HCWs) from under-5 children with the Feelix Smart Stethoscope and (2) to determine the reliability and performance of recorded lung sound interpretations by an automated algorithm compared with reference paediatrician interpretations. METHODS AND ANALYSIS: In a cross-sectional design, community HCWs will record lung sounds of ~1000 under-5-year-old children with suspected pneumonia at first-level facilities in Zakiganj subdistrict, Sylhet, Bangladesh. Enrolled children will be evaluated for pneumonia, including oxygen saturation, and have their lung sounds recorded by the Feelix Smart Stethoscope at four sequential chest locations: two back and two front positions. A novel sound-filtering algorithm will be applied to the recordings to address ambient noise and optimise recording quality. Recorded sounds will be assessed against a predefined quality threshold. A trained paediatric listening panel will classify recordings into one of the following categories: normal, crackles, wheeze, crackles and wheeze, or uninterpretable. All sound files will be classified into the same categories by the automated algorithm and compared with the panel classifications. Sensitivity, specificity, and predictive values of the automated algorithm will be assessed, considering the panel's final interpretation as the gold standard. ETHICS AND DISSEMINATION: The study protocol was approved by the National Research Ethics Committee of the Bangladesh Medical Research Council, Bangladesh (registration number: 09630012018) and the Academic and Clinical Central Office for Research and Development Medical Research Ethics Committee, Edinburgh, UK (REC reference: 18-HV-051). Dissemination will be through conference presentations, peer-reviewed journals and stakeholder engagement meetings in Bangladesh. TRIAL REGISTRATION NUMBER: NCT03959956.
Affiliation(s)
- Salahuddin Ahmed
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Dipak Kumar Mitra
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Public Health, North South University, Dhaka, Bangladesh
- Harish Nair
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Steven Cunningham
- Department of Child Life and Health, Royal Hospital for Sick Children, Edinburgh, UK
- Ahad Mahmud Khan
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Nazma Begum
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Mohammod Shahidullah
- Department of Neonatology, Bangabandhu Sheikh Mujib Medical University, Dhaka, Bangladesh
- Muhammad Shariful Islam
- Directorate General of Health Services, Ministry of Health and Family Welfare, Government of Bangladesh, Dhaka, Bangladesh
- John Norrie
- Usher Institute, Edinburgh Clinical Trials Unit, University of Edinburgh No. 9, Bioquarter, Edinburgh, UK
- Harry Campbell
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Aziz Sheikh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Abdullah H Baqui
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
- Eric D McCollum
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
- Global Program in Pediatric Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
17. Sanchez-Perez JA, Berkebile JA, Nevius BN, Ozmen GC, Nichols CJ, Ganti VG, Mabrouk SA, Clifford GD, Kamaleswaran R, Wright DW, Inan OT. A Wearable Multimodal Sensing System for Tracking Changes in Pulmonary Fluid Status, Lung Sounds, and Respiratory Markers. Sensors (Basel) 2022; 22:1130. PMID: 35161876. PMCID: PMC8838360. DOI: 10.3390/s22031130.
Abstract
Heart failure (HF) exacerbations, characterized by pulmonary congestion and breathlessness, require frequent hospitalizations, often resulting in poor outcomes. Current methods for tracking lung fluid and respiratory distress are unable to produce continuous, holistic measures of cardiopulmonary health. We present a multimodal sensing system that captures bioimpedance spectroscopy (BIS), multi-channel lung sounds from four contact microphones, multi-frequency impedance pneumography (IP), temperature, and kinematics to track changes in cardiopulmonary status. We first validated the system on healthy subjects (n = 10) and then conducted a feasibility study on patients (n = 14) with HF in clinical settings. Three measurements were taken throughout the course of hospitalization, and parameters relevant to lung fluid status—the ratio of the resistances at 5 kHz to those at 150 kHz (K)—and respiratory timings (e.g., respiratory rate) were extracted. We found a statistically significant increase in K (p < 0.05) from admission to discharge and observed respiratory timings in physiologically plausible ranges. The IP-derived respiratory signals and lung sounds were sensitive enough to detect abnormal respiratory patterns (Cheyne–Stokes) and inspiratory crackles from patient recordings, respectively. We demonstrated that the proposed system is suitable for detecting changes in pulmonary fluid status and capturing high-quality respiratory signals and lung sounds in a clinical setting.
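The lung-fluid parameter can be illustrated with a minimal sketch: K is the ratio of the resistance measured at 5 kHz to that at 150 kHz in a bioimpedance sweep. The sweep below is synthetic and only the ratio computation reflects the definition given in the abstract.
```python
# Minimal sketch of the lung-fluid parameter K: the ratio of the resistance at 5 kHz
# to that at 150 kHz in a bioimpedance sweep. The sweep below is synthetic.
import numpy as np

def k_parameter(frequencies_hz: np.ndarray, resistances_ohm: np.ndarray) -> float:
    r5 = np.interp(5_000, frequencies_hz, resistances_ohm)
    r150 = np.interp(150_000, frequencies_hz, resistances_ohm)
    return float(r5 / r150)

# Resistance typically falls with frequency as current increasingly passes through cells,
# so K > 1; low-frequency resistance is dominated by extracellular fluid, so K tends to
# rise as excess fluid is cleared (consistent with the admission-to-discharge increase
# reported above).
freqs = np.logspace(3, 6, 50)                       # 1 kHz to 1 MHz
resistances = 40 + 20 / (1 + (freqs / 5e4) ** 2)    # toy dispersion curve
print(f"K = {k_parameter(freqs, resistances):.3f}")
```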
Affiliation(s)
- Jesus Antonio Sanchez-Perez
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA; (J.A.B.); (G.C.O.); (S.A.M.); (O.T.I.)
- John A. Berkebile
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA; (J.A.B.); (G.C.O.); (S.A.M.); (O.T.I.)
- Brandi N. Nevius
- Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA;
- Goktug C. Ozmen
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA; (J.A.B.); (G.C.O.); (S.A.M.); (O.T.I.)
- Christopher J. Nichols
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA; (C.J.N.); (G.D.C.); (R.K.)
- Venu G. Ganti
- Bioengineering Graduate Program, Georgia Institute of Technology, Atlanta, GA 30332, USA;
- Samer A. Mabrouk
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA; (J.A.B.); (G.C.O.); (S.A.M.); (O.T.I.)
- Gari D. Clifford
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA; (C.J.N.); (G.D.C.); (R.K.)
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30332, USA
- Rishikesan Kamaleswaran
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA; (C.J.N.); (G.D.C.); (R.K.)
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30332, USA
- Department of Emergency Medicine, Emory University, Atlanta, GA 30332, USA;
- David W. Wright
- Department of Emergency Medicine, Emory University, Atlanta, GA 30332, USA;
- Omer T. Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA; (J.A.B.); (G.C.O.); (S.A.M.); (O.T.I.)
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332, USA; (C.J.N.); (G.D.C.); (R.K.)
18. McLane I, Lauwers E, Stas T, Busch-Vishniac I, Ides K, Verhulst S, Steckel J. Comprehensive Analysis System for Automated Respiratory Cycle Segmentation and Crackle Peak Detection. IEEE J Biomed Health Inform 2021; 26:1847-1860. PMID: 34705660. DOI: 10.1109/jbhi.2021.3123353.
Abstract
Digital auscultation is a well-known method for assessing lung sounds, but in typical practice it remains a subjective process that relies on human interpretation. Several methods have been presented for detecting or analyzing crackles but are limited in their real-world application because few have been integrated into comprehensive systems or validated on non-ideal data. This work details a complete signal analysis methodology for analyzing crackles in challenging recordings. The procedure comprises five sequential processing blocks: (1) motion artifact detection, (2) a deep learning denoising network, (3) respiratory cycle segmentation, (4) separation of discontinuous adventitious sounds from vesicular sounds, and (5) crackle peak detection. This system uses a collection of new methods and robustness-focused improvements on previous methods to analyze respiratory cycles and the crackles within them. To validate its accuracy, the system is tested on a database of 1000 simulated lung sounds with varying levels of motion artifacts, ambient noise, cycle lengths, and crackle intensities, in which the ground truth is exactly known. The system performs with an average F-score of 91.07% for detecting motion artifacts and 94.43% for respiratory cycle extraction, and an overall F-score of 94.08% for detecting the locations of individual crackles. The process also correctly handles healthy recordings. Preliminary validation is also presented on a small set of 20 patient recordings, for which the system performs comparably. These methods provide quantifiable analysis of respiratory sounds to enable clinicians to distinguish between types of crackles, their timing within the respiratory cycle, and their level of occurrence. Crackles are one of the most common abnormal lung sounds, presenting in multiple cardiorespiratory diseases, and these features will contribute to a better understanding of disease severity and progression in an objective, simple, and non-invasive way.
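The five-block structure described above can be sketched as a simple processing chain. Every stage below is a stub that only shows the intended data flow; none of it is the published implementation, and all function bodies are placeholders.
```python
# Structural sketch of a five-block crackle analysis chain (stubs only, not the published code).
import numpy as np

def detect_motion_artifacts(audio: np.ndarray, sr: int) -> np.ndarray:
    """Return a boolean mask of samples judged free of motion artifacts (stub)."""
    return np.ones_like(audio, dtype=bool)

def denoise(audio: np.ndarray, sr: int) -> np.ndarray:
    """Deep-learning denoiser placeholder; here a pass-through (stub)."""
    return audio

def segment_respiratory_cycles(audio: np.ndarray, sr: int) -> list[tuple[int, int]]:
    """Return (start, end) sample indices of respiratory cycles (stub: one cycle)."""
    return [(0, len(audio))]

def separate_discontinuous_sounds(cycle: np.ndarray, sr: int) -> np.ndarray:
    """Separate discontinuous adventitious sounds from vesicular sounds (stub)."""
    return cycle

def detect_crackle_peaks(das: np.ndarray, sr: int) -> list[int]:
    """Return sample indices of individual crackle peaks (stub)."""
    return []

def analyze(audio: np.ndarray, sr: int) -> dict:
    clean = denoise(audio[detect_motion_artifacts(audio, sr)], sr)
    cycles = []
    for start, end in segment_respiratory_cycles(clean, sr):
        das = separate_discontinuous_sounds(clean[start:end], sr)
        cycles.append({"cycle": (start, end), "crackles": detect_crackle_peaks(das, sr)})
    return {"cycles": cycles}
```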