1
Curtis JA, Borders JC, Dakin AE, Troche MS. Auditory-Perceptual Assessments of Cough: Characterizing Rater Reliability and the Effects of a Standardized Training Protocol. Folia Phoniatr Logop 2023; 76:77-90. [PMID: 37544291] [DOI: 10.1159/000533372]
Abstract
INTRODUCTION: Auditory-perceptual assessments of cough are commonly used by speech-language pathologists working with people with swallowing disorders, and emerging evidence is beginning to demonstrate their validity; however, their reliability among novice clinicians is unknown. The primary aim of this study was therefore to characterize the reliability of auditory-perceptual assessments of cough among a group of novice clinicians. As a secondary aim, we assessed the effects of a standardized training protocol on that reliability.
METHODS: Twelve novice clinicians blindly rated ten auditory-perceptual cough descriptors for 120 cough audio clips. The group then completed standardized training, after which the same cough audio clips were re-randomized and blindly rated again. Reliability was analyzed pre- and post-training within each clinician (intra-rater), between each unique pair of raters (dyad-level inter-rater), and across the entire group of raters (group-level inter-rater) using intraclass correlation coefficients and Cohen's kappa.
RESULTS: Pre-training reliability was greatest for measures of strength, effectiveness, and normality, and lowest when judging the type of expiratory maneuver (cough, throat clear, huff, other). The measures that improved most with training were ratings of perceived crispness, amount of voicing, and type of expiratory maneuver. Intra-rater reliability coefficients ranged from 0.580 to 0.903 pre-training and from 0.756 to 0.904 post-training. Dyad-level inter-rater reliability coefficients ranged from 0.295 to 0.745 pre-training and from 0.450 to 0.804 post-training. Group-level inter-rater reliability coefficients ranged from 0.454 to 0.919 pre-training and from 0.558 to 0.948 post-training.
CONCLUSION: Reliability of auditory-perceptual assessments varied across perceptual cough descriptors, but all values fell within the range historically reported for auditory-perceptual assessments of voice and visual-perceptual assessments of swallowing and cough airflow. Reliability improved for most cough descriptors following 30-60 min of standardized training. Future research is needed to examine the validity of auditory-perceptual assessments of cough by assessing the relationship between perceptual cough descriptors and instrumental measures of cough effectiveness, to better understand the role of perceptual assessments in clinical practice.
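For categorical descriptors such as the type of expiratory maneuver, inter-rater agreement of the kind reported above is typically quantified with Cohen's kappa, which corrects observed agreement for chance. A minimal pure-Python sketch on hypothetical ratings (the labels and values below are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] / n * c2[k] / n for k in c1)           # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical expiratory-maneuver ratings from two novice raters
rater1 = ["cough", "cough", "throat_clear", "huff", "cough", "throat_clear"]
rater2 = ["cough", "cough", "throat_clear", "cough", "cough", "huff"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.429
```

Values in the 0.4-0.6 range, as in this toy example, correspond to the moderate pre-training agreement the abstract describes for maneuver-type judgments.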
Affiliation(s)
- James A Curtis
- Department of Otolaryngology-Head and Neck Surgery, Aerodigestive Innovations Research Lab (AIR), Weill Cornell Medical College, New York, New York, USA
- Department of Biobehavioral Sciences, Laboratory for the Study of Upper Airway Dysfunction, Teachers College, Columbia University, New York, New York, USA
- James C Borders
- Department of Biobehavioral Sciences, Laboratory for the Study of Upper Airway Dysfunction, Teachers College, Columbia University, New York, New York, USA
- Avery E Dakin
- Department of Biobehavioral Sciences, Laboratory for the Study of Upper Airway Dysfunction, Teachers College, Columbia University, New York, New York, USA
- Michelle S Troche
- Department of Biobehavioral Sciences, Laboratory for the Study of Upper Airway Dysfunction, Teachers College, Columbia University, New York, New York, USA
2
A Progressively Expanded Database for Automated Lung Sound Analysis: An Update. Appl Sci (Basel) 2022; 12:7623. [DOI: 10.3390/app12157623]
Abstract
We previously established an open-access lung sound database, HF_Lung_V1, and developed deep learning models for inhalation, exhalation, continuous adventitious sound (CAS), and discontinuous adventitious sound (DAS) detection. The amount of data used for training contributes to model accuracy. In this study, we collected larger quantities of data to further improve model performance and explored issues of noisy labels and overlapping sounds. HF_Lung_V1 was expanded to HF_Lung_V2 with a 1.43× increase in the number of audio files. Convolutional neural network–bidirectional gated recurrent unit network models were trained separately using the HF_Lung_V1 (V1_Train) and HF_Lung_V2 (V2_Train) training sets. These were tested using the HF_Lung_V1 (V1_Test) and HF_Lung_V2 (V2_Test) test sets, respectively. Segment and event detection performance was evaluated. Label quality was assessed. Overlap ratios were computed between inhalation, exhalation, CAS, and DAS labels. The model trained using V2_Train exhibited improved performance in inhalation, exhalation, CAS, and DAS detection on both V1_Test and V2_Test. Poor CAS detection was attributed to the quality of CAS labels. DAS detection was strongly influenced by the overlapping of DAS with inhalation and exhalation. In conclusion, collecting greater quantities of lung sound data is vital for developing more accurate lung sound analysis models.
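The overlap analysis described above reduces to computing, for each label type, what fraction of its total labeled duration coincides with another label type's segments. A minimal sketch under that assumed definition (the segment values are hypothetical, not from HF_Lung_V2):

```python
def overlap_ratio(segs_a, segs_b):
    """Fraction of the total duration of segs_a that overlaps segs_b.

    Each segment is an (onset, offset) pair in seconds; segments within
    a list are assumed non-overlapping.
    """
    total = sum(e - s for s, e in segs_a)
    shared = sum(
        max(0.0, min(ea, eb) - max(sa, sb))   # pairwise intersection length
        for sa, ea in segs_a
        for sb, eb in segs_b
    )
    return shared / total if total else 0.0

# Hypothetical DAS labels vs. inhalation labels in one 15-s recording
das = [(1.0, 2.0), (6.0, 7.5)]
inhalation = [(0.5, 1.5), (7.0, 8.0)]
print(overlap_ratio(das, inhalation))  # → 0.4
```

A high ratio of DAS against inhalation/exhalation labels would explain why DAS detection was strongly influenced by overlapping sounds.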
4
Fu J, Teng WN, Li W, Chiou YW, Huang D, Liu J, Ting CK, Tsou MY, Yu L. Estimation of Respiratory Nasal Pressure and Flow Rate Signals Using Different Respiratory Sound Features. IRBM 2021. [DOI: 10.1016/j.irbm.2021.12.002]
5
Hsu FS, Huang SR, Huang CW, Huang CJ, Cheng YR, Chen CC, Hsiao J, Chen CW, Chen LC, Lai YC, Hsu BF, Lin NJ, Tsai WL, Wu YL, Tseng TL, Tseng CT, Chen YT, Lai F. Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database-HF_Lung_V1. PLoS One 2021; 16:e0254134. [PMID: 34197556] [PMCID: PMC8248710] [DOI: 10.1371/journal.pone.0254134]
Abstract
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios, such as monitoring disease progression in coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also compared performance between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in F1 scores and areas under the receiver operating characteristic curve in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
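Event-level scoring of the kind benchmarked here typically matches each predicted breath-phase or adventitious-sound event to a ground-truth event when their temporal overlap is sufficient, then computes F1 from the matches. A minimal sketch assuming a Jaccard-index matching criterion with a 0.5 threshold (the exact criterion and all interval values below are illustrative assumptions, not taken from the paper):

```python
def jaccard(a, b):
    """Jaccard index of two (onset, offset) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def event_f1(truth, pred, threshold=0.5):
    """Event-level F1: a prediction is a true positive when it matches an
    unused ground-truth event with Jaccard index >= threshold."""
    unused = list(truth)
    tp = 0
    for p in pred:
        for t in unused:
            if jaccard(p, t) >= threshold:
                unused.remove(t)   # each truth event matches at most once
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

truth = [(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)]   # hypothetical inhalation events
pred = [(0.1, 1.1), (2.6, 3.4), (8.0, 9.0)]
print(round(event_f1(truth, pred), 3))  # → 0.333
```

Greedy one-to-one matching like this prevents a single long prediction from claiming several ground-truth events, which would inflate recall.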
Affiliation(s)
- Fu-Shun Hsu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Chao-Jung Huang
- Joint Research Center for Artificial Intelligence Technology and All Vista Healthcare, National Taiwan University, Taipei, Taiwan
- Yuan-Ren Cheng
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Department of Life Science, College of Life Science, National Taiwan University, Taipei, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan
- Jack Hsiao
- HCC Healthcare Group, New Taipei, Taiwan
- Chung-Wei Chen
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Li-Chin Chen
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Yen-Chun Lai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Bi-Fang Hsu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Nian-Jhen Lin
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Division of Pulmonary Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Wan-Ling Tsai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Lin Wu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Yi-Tsun Chen
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
6
Lu X, Azevedo Coste C, Nierat MC, Renaux S, Similowski T, Guiraud D. Respiratory Monitoring Based on Tracheal Sounds: Continuous Time-Frequency Processing of the Phonospirogram Combined with Phonocardiogram-Derived Respiration. Sensors 2020; 21:99. [PMID: 33375762] [PMCID: PMC7795986] [DOI: 10.3390/s21010099]
Abstract
Patients with central respiratory paralysis can benefit from diaphragm pacing to restore respiratory function. However, a continuous respiratory monitoring method that alerts on apnea occurrence is needed to improve the efficiency and safety of the pacing system. In this study, we present a preliminary validation of an acoustic apnea detection method on data from healthy subjects. Thirteen healthy participants each performed one session of two 2-min recordings, including a voluntary respiratory pause. The recordings were post-processed by combining temporal and frequency detection domains, and a new method, Phonocardiogram-Derived Respiration (PDR), was proposed. The detection results were compared to synchronized pneumotachograph, electrocardiogram (ECG), and abdominal strap (plethysmograph) signals. The proposed method reached an apnea detection rate of 92.3%, with 99.36% specificity, 85.27% sensitivity, and 91.49% accuracy. The PDR method showed a good correlation of 0.77 with ECG-Derived Respiration (EDR), and the comparison of R-R intervals and S-S intervals also indicated a good correlation of 0.89. The performance of this respiratory detection algorithm meets the minimal requirements for use in a real situation. Noise from the participant speaking or from the environment had little influence on the detection result, as did body position. The high correlation between PDR and EDR indicates the feasibility of monitoring respiration with PDR.
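The temporal branch of an acoustic apnea detector of this kind amounts to flagging sustained drops in tracheal-sound energy. A minimal sketch of that idea on a synthetic signal (window length, threshold, and minimum pause duration are illustrative assumptions, not the paper's parameters):

```python
import math

def detect_pauses(signal, fs, win_s=0.5, rel_thresh=0.1, min_pause_s=2.0):
    """Flag windows whose RMS energy falls below a fraction of the median
    window RMS, then report runs longer than min_pause_s as apnea candidates."""
    win = int(win_s * fs)
    rms = [math.sqrt(sum(x * x for x in signal[i:i + win]) / win)
           for i in range(0, len(signal) - win + 1, win)]
    thresh = rel_thresh * sorted(rms)[len(rms) // 2]   # median-relative threshold
    pauses, start = [], None
    for i, r in enumerate(rms + [float("inf")]):       # sentinel closes last run
        if r < thresh and start is None:
            start = i
        elif r >= thresh and start is not None:
            if (i - start) * win_s >= min_pause_s:
                pauses.append((start * win_s, i * win_s))
            start = None
    return pauses

# Synthetic tracheal-sound envelope: breathing bursts with a 3-s silent pause
fs = 100
sig = []
for sec in range(10):
    silent = 4 <= sec < 7
    for n in range(fs):
        sig.append(0.0 if silent else math.sin(2 * math.pi * 5 * n / fs))
print(detect_pauses(sig, fs))  # → [(4.0, 7.0)]
```

A real detector would combine such a temporal criterion with frequency-domain evidence, as the study does, to reject speech and environmental noise.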
Affiliation(s)
- Xinyue Lu
- Faculté des Sciences, University of Montpellier, F-34090 Montpellier, France
- NeuroResp, F-34600 Les Aires, France
- Marie-Cécile Nierat
- UMRS1158 Neurophysiologie Respiratoire Expérimentale et Clinique, INSERM, Sorbonne Université, F-75005 Paris, France
- Serge Renaux
- NeuroResp, F-34600 Les Aires, France
- NEURINNOV, F-34090 Montpellier, France
- Thomas Similowski
- UMRS1158 Neurophysiologie Respiratoire Expérimentale et Clinique, INSERM, Sorbonne Université, F-75005 Paris, France
- AP-HP, Site Pitié-Salpêtrière, Service de Pneumologie, Médecine Intensive et Réanimation (Département R3S), Groupe Hospitalier Universitaire APHP-Sorbonne Université, F-75013 Paris, France
- David Guiraud
- INRIA, F-34090 Montpellier, France
- NEURINNOV, F-34090 Montpellier, France