1
Teplitzky TB, Zauher AJ, Isaiah A. Alternatives to Polysomnography for the Diagnosis of Pediatric Obstructive Sleep Apnea. Diagnostics (Basel) 2023; 13:1956. PMID: 37296808. DOI: 10.3390/diagnostics13111956. Received 2023-04-11; accepted 2023-05-30. Open access.
Abstract
Diagnosis of obstructive sleep apnea (OSA) in children with sleep-disordered breathing (SDB) requires hospital-based, overnight level I polysomnography (PSG). Obtaining a level I PSG can be challenging for children and their caregivers due to the costs, barriers to access, and associated discomfort. Less burdensome methods that approximate pediatric PSG data are needed. The goal of this review is to evaluate and discuss alternatives for evaluating pediatric SDB. To date, wearable devices, single-channel recordings, and home-based PSG have not been validated as suitable replacements for PSG. However, they may play a role in risk stratification or as screening tools for pediatric OSA. Further studies are needed to determine if the combined use of these metrics could predict OSA.
Affiliation(s)
- Taylor B Teplitzky: Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Audrey J Zauher: Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Amal Isaiah: Departments of Otorhinolaryngology-Head and Neck Surgery; Pediatrics; and Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
2
Cheng X, Hu F, Yang B, Wang F, Olofsson T. Contactless sleep posture measurements for demand-controlled sleep thermal comfort: A pilot study. Indoor Air 2022; 32:e13175. PMID: 36567523. DOI: 10.1111/ina.13175. Received 2022-08-25; accepted 2022-10-24.
Abstract
Thermal comfort during sleep is essential for both sleep quality and health. Few effective contactless methods currently exist for detecting sleep thermal comfort at any time of day or night. In this paper, a vision-based approach for detecting human thermal comfort during sleep was proposed, intended to avoid overcooled or overheated air supply, meet the thermal comfort needs of sleeping occupants, and improve sleep quality and health. Based on 438 valid questionnaire responses, 10 types of thermal comfort sleep postures were summarized. Using a large volume of captured data, a detection algorithm for human sleeping postures was constructed, and a corresponding weighting model was established. A total of 2.65 million frames of posture data in natural sleep were collected, and a thermal comfort-related sleep posture dataset was created. Finally, the robustness and effectiveness of the proposed algorithm were validated. The validation results show that sleeping posture and human skeleton keypoints can be used to estimate sleeping thermal comfort, and that the quilt coverage area can be fused in to improve detection accuracy.
Affiliation(s)
- Xiaogang Cheng: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China; Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden
- Fei Hu: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China
- Bin Yang: Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden; School of Energy and Safety Engineering, Tianjin Chengjian University, Tianjin, China
- Faming Wang: Department of Biosystems (BIOSYST), KU Leuven, Leuven, Belgium
- Thomas Olofsson: Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden
3
A systematic review of the validity of non-invasive sleep-measuring devices in mid-to-late life adults: Future utility for Alzheimer's disease research. Sleep Med Rev 2022; 65:101665. DOI: 10.1016/j.smrv.2022.101665. Received 2022-01-14; accepted 2022-06-23.
4
Goldstein Y, Schätz M, Avigal M. Chest area segmentation in 3D images of sleeping patients. Med Biol Eng Comput 2022; 60:2159-2172. DOI: 10.1007/s11517-022-02577-1. Received 2021-06-06; accepted 2022-04-12.
5
Auditory Property-Based Features and Artificial Neural Network Classifiers for the Automatic Detection of Low-Intensity Snoring/Breathing Episodes. Appl Sci (Basel) 2022; 12:2242. DOI: 10.3390/app12042242.
Abstract
The definitive diagnosis of obstructive sleep apnea syndrome (OSAS) is made using an overnight polysomnography (PSG) test, which requires the patient to wear multiple measurement sensors during an overnight hospitalization. This setup imposes physical constraints and a heavy burden on the patient. Recent studies have reported another technique for OSAS screening based on snoring/breathing episodes (SBEs) extracted from recordings acquired by a noncontact microphone. However, SBEs have a wide dynamic range (>90 dB), and low-intensity episodes are barely audible, so a method is needed to detect SBEs even in low-signal-to-noise-ratio (SNR) environments. We previously developed a method for the automatic detection of low-intensity SBEs using an artificial neural network (ANN), but for practical use this method required further improvement in detection accuracy and speed. To accomplish this, we propose in this study a new method to detect low-intensity SBEs based on neural activity pattern (NAP)-based cepstral coefficients (NAPCC) and ANN classifiers. Leave-one-out cross-validation demonstrated that the proposed method is superior to previous methods for classifying SBEs and non-SBEs, even in low-SNR conditions (accuracy: 85.99 ± 5.69% vs. 75.64 ± 18.8%).
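The entry's NAP-based cepstral coefficients come from an auditory model not reproduced here; as a generic illustration of the cepstral front end such classifiers consume, a real cepstrum (inverse FFT of the log-magnitude spectrum) can be sketched as follows. The frame length, sampling rate, and the synthetic "snore" and noise frames are illustrative assumptions, not the paper's data.

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one audio frame: inverse FFT of the log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12   # floor to avoid log(0)
    return np.fft.irfft(np.log(spectrum), n=len(frame))

rng = np.random.default_rng(0)
fs, n = 8000, 1024
t = np.arange(n) / fs

# Illustrative classes: a periodic "snore-like" frame vs. a quiet noise frame.
snore = np.sin(2 * np.pi * 90 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t))
noise = 0.05 * rng.standard_normal(n)

c_snore = real_cepstrum(snore)[:20]   # keep low-quefrency coefficients
c_noise = real_cepstrum(noise)[:20]   # these would feed an ANN classifier downstream
```

The two classes yield clearly different low-quefrency coefficients, which is the separability a downstream ANN exploits.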
6
Skovgaard EL, Pedersen J, Møller NC, Grøntved A, Brønd JC. Manual Annotation of Time in Bed Using Free-Living Recordings of Accelerometry Data. Sensors (Basel) 2021; 21:8442. PMID: 34960533. PMCID: PMC8707394. DOI: 10.3390/s21248442. Received 2021-11-12; accepted 2021-12-14.
Abstract
With the emergence of machine learning for the classification of sleep and other human behaviors from accelerometer data, the need for correctly annotated data is higher than ever. We present and evaluate a novel method for the manual annotation of in-bed periods in accelerometer data using the open-source software Audacity®, and we compare the method to the EEG-based sleep monitoring device Zmachine® Insight+ and self-reported sleep diaries. To evaluate the manual annotation method, we calculated inter- and intra-rater agreement and agreement with Zmachine and sleep diaries using intraclass correlation coefficients and Bland–Altman analysis. Our results showed excellent inter- and intra-rater agreement and excellent agreement with Zmachine and sleep diaries. The Bland–Altman limits of agreement were generally around ±30 min for the comparison between the manual annotation and the Zmachine timestamps for the in-bed period, and the mean bias was minuscule. We conclude that the manual annotation method presented is a viable option for annotating in-bed periods in accelerometer data, which will help qualify datasets lacking labels or sleep records.
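A minimal sketch of the Bland–Altman computation behind the reported limits of agreement; the in-bed durations below are fabricated, and the ±30 min figure in the abstract is a result, not an input.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Fabricated in-bed durations (minutes): manual annotation vs. Zmachine.
manual = [480.0, 455.0, 500.0, 470.0, 510.0]
zmachine = [475.0, 460.0, 495.0, 480.0, 505.0]
bias, loa_low, loa_high = bland_altman(manual, zmachine)
```

A mean bias near zero with narrow limits of agreement is the pattern the authors report for their annotation method.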
7
Mordoh V, Zigel Y. Audio source separation to reduce sleeping partner sounds: a simulation study. Physiol Meas 2021; 42. PMID: 34038872. DOI: 10.1088/1361-6579/ac0592. Received 2021-03-04; accepted 2021-05-26.
Abstract
Objective: When recording a subject in an at-home environment for sleep evaluation or other breathing-disorder diagnoses using non-contact microphones, the breathing recordings (audio signals) can be distorted by sounds such as TV, outside noise, or air-conditioners. If two people are sleeping together, both may produce breathing/snoring sounds that need to be separated. In this study, we present signal processing and source separation algorithms for the enhancement of individual breathing/snoring audio signals in a simulated environment. Approach: We developed a computer simulation of mixed signals derived from genuine nocturnal recordings of 110 subjects. Two main source separation approaches were tested: (1) changing the basis vectors for the mixtures in the time domain (principal and independent component analysis, PCA/ICA) and (2) converting the mixtures to their time-frequency representations (degenerate un-mixing estimation technique, DUET). A beamforming approach was also tested. Main results: With a reverberation time of 0.15 s and zero SNR between signals, the separation showed good performance (mean source-to-interference ratio (SIR): DUET = 12.831 dB, ICA = 3.388 dB, PCA = 4.452 dB) and for beamforming (SIR = -0.304 dB). To evaluate the source separation results, we propose two new measures: a spectral similarity score (mel-SID) between the target source and its estimate (after separation) and a breathing energy ratio (BER). The new measures yielded comparable conclusions (mel-SID: DUET = 1.320, ICA = 2.732, PCA = 1.927, beamforming = 2.590; BER: DUET = 10.241 dB, ICA = 0.270 dB, PCA = -2.847 dB, beamforming = -1.151 dB) but better differentiated the performance of the algorithms. DUET was superior on all measures; its main advantage is that it uses only two microphones for separation. Significance: The separated audio signal can thus contribute to a more informed diagnosis of sleep-related and non-sleep-related diseases. The Institutional Review Committee of Soroka University Medical Center approved this study protocol (protocol number 10141), and all methods were performed in accordance with the relevant guidelines and regulations.
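Of the compared approaches, ICA is the simplest to illustrate. The following is a generic two-microphone FastICA sketch on synthetic waveforms (mirroring the classic sine/square separation demonstration), not the authors' implementation; the mixing matrix and sources are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0, 8, n)

# Two synthetic sources standing in for the two sleepers' sounds.
s1 = np.sin(2.0 * t)              # smooth "breathing" rhythm
s2 = np.sign(np.sin(3.0 * t))     # square-ish "snoring" bursts
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.5, 1.0]])   # instantaneous 2-microphone mixing
X = A @ S                                 # the two microphone signals

# Whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA with a tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt                 # symmetric decorrelation: (W W^T)^(-1/2) W

Y = W @ Z                      # estimated sources (up to order and sign)
```

DUET, the best performer in the study, instead clusters attenuation/delay estimates in the time-frequency plane and masks the mixture, which handles more sources than microphones.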
Affiliation(s)
- Valeria Mordoh: Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yaniv Zigel: Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
8
Fonseca P, van Gilst MM, Radha M, Ross M, Moreau A, Cerny A, Anderer P, Long X, van Dijk JP, Overeem S. Automatic sleep staging using heart rate variability, body movements, and recurrent neural networks in a sleep disordered population. Sleep 2020; 43:zsaa048. PMID: 32249911. DOI: 10.1093/sleep/zsaa048. Received 2019-11-04; revised 2020-03-09. Open access.
Abstract
STUDY OBJECTIVES To validate a previously developed sleep staging algorithm using heart rate variability (HRV) and body movements in an independent broad cohort of unselected sleep disordered patients. METHODS We applied a previously designed algorithm for automatic sleep staging using long short-term memory recurrent neural networks to model sleep architecture. The classifier uses 132 HRV features computed from electrocardiography and activity counts from accelerometry. We retrained our algorithm using two public datasets containing both healthy sleepers and sleep disordered patients, then tested its performance on an independent hold-out validation set of sleep recordings covering a wide range of sleep disorders, collected in a tertiary sleep medicine center. RESULTS The classifier achieved substantial agreement on four-class sleep staging (wake/N1-N2/N3/rapid eye movement [REM]), with an average κ of 0.60 and accuracy of 75.9%. Performance was significantly higher in insomnia patients (κ = 0.62, accuracy = 77.3%) and significantly lower only in REM parasomnia patients (κ = 0.47, accuracy = 70.5%). For two-class wake/sleep classification, the classifier achieved a κ of 0.65, with a sensitivity (to wake) of 72.9% and specificity of 94.0%. CONCLUSIONS This study shows that the combination of HRV, body movements, and a state-of-the-art deep neural network can reach substantial agreement in automatic sleep staging compared with polysomnography, even in patients suffering from a multitude of sleep disorders. The required physiological signals can be obtained in various ways, including non-obtrusive wrist-worn sensors, opening up new avenues for clinical diagnostics.
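The abstract does not enumerate the 132 HRV features; as a minimal sketch, three standard time-domain HRV features of the kind such classifiers consume can be computed from an RR-interval series (the RR values below are fabricated):

```python
import numpy as np

def hrv_time_features(rr_ms):
    """SDNN, RMSSD, and pNN50 from a series of RR intervals (milliseconds)."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr_ms)
    return {
        "sdnn": rr_ms.std(ddof=1),                   # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),       # beat-to-beat variability
        "pnn50": np.mean(np.abs(diffs) > 50) * 100,  # % successive diffs > 50 ms
    }

# Fabricated RR intervals for a few heartbeats.
rr = [800.0, 810.0, 790.0, 805.0, 860.0]
features = hrv_time_features(rr)
```

In practice such features are computed per 30-second epoch and fed, as a sequence, to the recurrent network.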
Affiliation(s)
- Pedro Fonseca: Philips Research, Eindhoven, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Merel M van Gilst: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Sleep Medicine Centre Kempenhaeghe, Heeze, The Netherlands
- Mustafa Radha: Philips Research, Eindhoven, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marco Ross: Sleep and Respiratory Care, Home Healthcare Solutions, Philips Austria GmbH, Vienna, Austria
- Arnaud Moreau: Sleep and Respiratory Care, Home Healthcare Solutions, Philips Austria GmbH, Vienna, Austria
- Andreas Cerny: Sleep and Respiratory Care, Home Healthcare Solutions, Philips Austria GmbH, Vienna, Austria
- Peter Anderer: Sleep and Respiratory Care, Home Healthcare Solutions, Philips Austria GmbH, Vienna, Austria
- Xi Long: Philips Research, Eindhoven, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Johannes P van Dijk: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Sleep Medicine Centre Kempenhaeghe, Heeze, The Netherlands
- Sebastiaan Overeem: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Sleep Medicine Centre Kempenhaeghe, Heeze, The Netherlands
9
Saner H, Knobel SEJ, Schuetz N, Nef T. Contact-free sensor signals as a new digital biomarker for cardiovascular disease: chances and challenges. Eur Heart J Digit Health 2020; 1:30-39. PMID: 36713967. PMCID: PMC9707864. DOI: 10.1093/ehjdh/ztaa006. Received 2020-09-03; accepted 2020-11-18.
Abstract
Multiple sensor systems are used to monitor physiological parameters, activities of daily living, and behaviour. Digital biomarkers can be extracted and used as indicators of health and disease. Signals are acquired by object sensors, wearable sensors, or contact-free sensors, including cameras, pressure sensors, non-contact capacitively coupled electrocardiography (cECG), radar, and passive infrared motion sensors. This review summarizes contemporary knowledge on the use of contact-free sensors in patients with cardiovascular disease and in healthy subjects, following the PRISMA statement. Chances and challenges are discussed. Thirty-six publications were rated to be of medium (31) or high (5) relevance. Results are best for monitoring of heart rate and heart rate variability using cardiac vibration, facial camera, or cECG; for respiration using cardiac vibration, cECG, or camera; and for sleep using ballistocardiography. Early results from radar sensors for monitoring vital signs are promising. Contact-free sensors are minimally invasive, well accepted, and suitable for long-term monitoring, in particular in patients' homes. Motion artefacts remain a major problem. Results from long-term use in larger patient cohorts are still lacking, but the technology is about to enter the market, and we can expect to see more clinical results in the near future.
Affiliation(s)
- Hugo Saner (corresponding author): ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, CH 3008 Bern, Switzerland; Department of Preventive Cardiology, University Hospital Bern, Inselspital, Freiburgstrasse 18, CH 3010 Bern, Switzerland
- Samuel Elia Johannes Knobel: ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, CH 3008 Bern, Switzerland
- Narayan Schuetz: ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, CH 3008 Bern, Switzerland
- Tobias Nef: ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, CH 3008 Bern, Switzerland; Department of Neurology, University Hospital Bern, Inselspital, Freiburgstrasse 18, CH 3010 Bern, Switzerland
10
Nakano H, Furukawa T, Tanigawa T. Tracheal Sound Analysis Using a Deep Neural Network to Detect Sleep Apnea. J Clin Sleep Med 2019; 15:1125-1133. PMID: 31482834. DOI: 10.5664/jcsm.7804.
Abstract
STUDY OBJECTIVES Portable devices for home sleep apnea testing are often limited by their inability to discriminate sleep/wake status, possibly resulting in underestimation. Tracheal sound (TS), which can be visualized as a spectrogram, carries information about apnea/hypopnea and sleep/wake status. We hypothesized that image analysis of all-night TS recordings by a deep neural network (DNN) would be capable of detecting breathing events and classifying sleep/wake status. The aim of this study was to develop a DNN-based system for sleep apnea testing and validate it using a large sample of polysomnography (PSG) data. METHODS PSG examinations for the evaluation of sleep-disordered breathing (SDB) were performed for 1,852 patients: 1,548 PSG records were used to develop the system, and the remaining 304 records were used for validation. TS spectrogram images were obtained every 60 seconds, labeled with the PSG scoring results (breathing event and sleep/wake status), and introduced to DNN learning. Two different DNNs were trained, one for breathing status and one for sleep/wake status. RESULTS A DNN with convolutional layers showed the best performance for discriminating breathing status; the same DNN structure was trained for sleep/wake discrimination. In the validation study, the DNN analysis discriminated sleep/wake status with reasonable accuracy. The diagnostic sensitivity, specificity, and area under the receiver operating characteristic curve for diagnosis of SDB with an apnea-hypopnea index of >5, 15, and 30 were 0.98, 0.76, and 0.99; 0.97, 0.90, and 0.99; and 0.92, 0.94, and 0.98, respectively. CONCLUSIONS The developed system using TS DNN analysis performs well for SDB testing.
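The TS spectrogram-image front end can be sketched as follows; the sampling rate, FFT size, and hop length are illustrative assumptions, with only the 60-second epoch length taken from the abstract.

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude STFT with a Hann window; returns (freq_bins, time_frames)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log(spec + 1e-10).T

fs = 2000                    # assumed tracheal-sound sampling rate
epoch_len = 60 * fs          # one 60-second epoch, as in the abstract
audio = np.random.default_rng(0).standard_normal(3 * epoch_len)  # stand-in night

# Split the recording into 60-second epochs; each becomes an "image" for a DNN.
epochs = [audio[i:i + epoch_len] for i in range(0, len(audio), epoch_len)]
images = [log_spectrogram(e) for e in epochs]
```

Each per-epoch image would then be labeled with the PSG score and fed to the convolutional network.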
Affiliation(s)
- Hiroshi Nakano: Sleep Disorders Centre, National Hospital Organization Fukuoka National Hospital, Yakatabaru, Minami-ku, Fukuoka City, Japan
- Tomokazu Furukawa: Sleep Disorders Centre, National Hospital Organization Fukuoka National Hospital, Yakatabaru, Minami-ku, Fukuoka City, Japan
- Takeshi Tanigawa: Department of Public Health, Graduate School of Medicine, Juntendo University, Hongo, Bunkyo-ku, Tokyo, Japan
11
OSAS assessment with entropy analysis of high resolution snoring audio signals. Biomed Signal Process Control 2020; 101965. DOI: 10.1016/j.bspc.2020.101965.
12
Montazeri Ghahjaverestan N, Akbarian S, Hafezi M, Saha S, Zhu K, Gavrilovic B, Taati B, Yadollahi A. Sleep/Wakefulness Detection Using Tracheal Sounds and Movements. Nat Sci Sleep 2020; 12:1009-1021. PMID: 33235534. PMCID: PMC7680175. DOI: 10.2147/nss.s276107. Received 2020-08-08; accepted 2020-10-08. Open access.
Abstract
PURPOSE The current gold standard for detecting sleep/wakefulness is based on electroencephalography, which is inconvenient to include in portable sleep screening devices; estimating sleeping time is therefore a challenge for such devices. Without sleeping time, sleep parameters such as the apnea/hypopnea index (AHI), an index quantifying sleep apnea severity, can be underestimated. Recent studies have used tracheal sounds and movements for sleep screening and calculated AHI without considering sleeping time. In this study, we investigated the detection of sleep/wakefulness states and the estimation of sleep parameters using tracheal sounds and movements. MATERIALS AND METHODS Participants with suspected sleep apnea who were referred for sleep screening were included. Simultaneously with polysomnography, tracheal sounds and movements were recorded with a small wearable device, called the Patch, attached over the trachea. Each 30-second epoch of tracheal data was scored as sleep or wakefulness using an automatic classification algorithm, and the algorithm's performance was compared to sleep/wakefulness scored blindly from the polysomnography. RESULTS Eighty-eight subjects were included. The accuracy of sleep/wakefulness detection was 82.3±8.66%, with a sensitivity of 87.8±10.8% (sleep), specificity of 71.4±18.5% (wake), F1 of 88.1±9.3%, and Cohen's kappa of 0.54. The correlations between the estimated and polysomnography-based measures of total sleep time and sleep efficiency were 0.78 (p<0.001) and 0.70 (p<0.001), respectively. CONCLUSION Sleep/wakefulness periods can be detected using tracheal sounds and movements. Combined with our previous studies on screening sleep apnea with tracheal sounds, these results provide strong evidence that respiratory sound analysis can be used to develop robust, convenient, and cost-effective portable devices for sleep apnea monitoring.
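Epoch-by-epoch agreement here is summarized with Cohen's kappa; a minimal sketch of that metric for binary sleep/wake labels (the epoch labels below are fabricated):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equally long label sequences."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical 30-second epoch labels: 1 = sleep, 0 = wake.
device = [1, 1, 0, 0]
psg = [1, 0, 0, 0]
kappa = cohens_kappa(device, psg)
```

Because sleep epochs dominate a night, kappa is a fairer summary than raw accuracy for this task.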
Affiliation(s)
- Nasim Montazeri Ghahjaverestan: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Sina Akbarian: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Maziar Hafezi: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Shumit Saha: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Kaiyin Zhu: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Bojan Gavrilovic: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Babak Taati: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Computer Science, University of Toronto, Toronto, ON, Canada
- Azadeh Yadollahi: Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
13
Sleep stage classification from heart-rate variability using long short-term memory neural networks. Sci Rep 2019; 9:14149. PMID: 31578345. PMCID: PMC6775145. DOI: 10.1038/s41598-019-49703-y. Received 2019-02-04; accepted 2019-07-10. Open access.
Abstract
Automated sleep stage classification using heart rate variability (HRV) may provide an ergonomic and low-cost alternative to gold standard polysomnography, creating possibilities for unobtrusive home-based sleep monitoring. Current methods, however, are limited in their ability to take long-term sleep architectural patterns into account. A long short-term memory (LSTM) network is proposed to model long-term cardiac sleep architecture information and validated on a comprehensive dataset (292 participants, 584 nights, 541,214 annotated 30-s sleep segments) comprising a wide range of ages and pathological profiles, annotated according to the Rechtschaffen and Kales (R&K) standard. The model outperforms state-of-the-art approaches, which were often limited to non-temporal or short-term recurrent classifiers, achieving a Cohen's κ of 0.61 ± 0.15 and accuracy of 77.00 ± 8.90% across the entire database. Further analysis revealed that performance may decline for individuals aged 50 years and older. These results demonstrate the merit of deep temporal modelling using a diverse dataset and advance the state of the art for HRV-based sleep stage classification. Further research is warranted into individuals over the age of 50, as performance tends to worsen in this sub-population.
14
Xue B, Deng B, Hong H, Wang Z, Zhu X, Feng DD. Non-Contact Sleep Stage Detection Using Canonical Correlation Analysis of Respiratory Sound. IEEE J Biomed Health Inform 2020; 24:614-625. PMID: 30990201. DOI: 10.1109/jbhi.2019.2910566.
Abstract
Respiratory sound is able to differentiate sleep stages and provides a non-contact and cost-effective solution for the diagnosis and treatment monitoring of sleep-related diseases. While most existing respiratory sound-based methods focus on a limited number of sleep stages, such as sleep/wake and wake/rapid eye movement (REM)/non-REM, it is essential to detect sleep stages at a finer level for sleep quality evaluation. In this paper, we study for the first time a sleep stage detection method aimed at classifying sleep into four stages, wake, REM, light sleep, and deep sleep, from respiratory sound. In addition to time-domain and frequency-domain features of the respiratory sound, non-linear features of the snoring sound are devised to better characterize snoring-related components of the signal. To effectively fuse the three feature sets, a novel feature fusion technique combining generalized canonical correlation analysis with the ReliefF algorithm is proposed for discriminative feature selection. Final stage detection is achieved with popular classifiers, including decision trees, support vector machines, K-nearest neighbors, and an ensemble classifier. To evaluate the proposed method, we built an in-house dataset comprising 13 nights of sleep audio data from a sleep laboratory. Experimental results indicate that the proposed method outperforms existing related methods and is promising for large-scale non-contact sleep monitoring.
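The paper's fusion uses generalized CCA with ReliefF; plain two-view CCA, the underlying building block, can be sketched as follows on synthetic feature sets sharing one latent signal (all dimensions and the regularization constant are illustrative assumptions).

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-8):
    """Canonical correlations between sample matrices X (n, p) and Y (n, q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        d, E = np.linalg.eigh(S)
        return E @ np.diag(d ** -0.5) @ E.T

    # Singular values of the whitened cross-covariance are the correlations.
    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)

rng = np.random.default_rng(0)
n = 500
latent = rng.standard_normal(n)   # shared "respiratory" component across views

# Two illustrative feature views (e.g., time-domain vs. frequency-domain sets).
X = np.column_stack([latent + 0.1 * rng.standard_normal(n) for _ in range(3)])
Y = np.column_stack([latent + 0.1 * rng.standard_normal(n) for _ in range(2)])
rho = cca_correlations(X, Y)
```

A first canonical correlation near 1 indicates the two feature views share a common component, which is what the fusion step exploits before ReliefF selects discriminative dimensions.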
15
Dafna E, Tarasiuk A, Zigel Y. Sleep staging using nocturnal sound analysis. Sci Rep 2018; 8:13474. PMID: 30194402. PMCID: PMC6128888. DOI: 10.1038/s41598-018-31748-0. Received 2018-02-22; accepted 2018-08-22. Open access.
Abstract
Sleep staging is essential for evaluating sleep and its disorders. Most sleep studies today incorporate contact sensors that may interfere with natural sleep and may bias results. Moreover, the availability of sleep studies is limited, and many people with sleep disorders remain undiagnosed. Here, we present a pioneering approach for rapid eye movement (REM), non-REM, and wake staging (macro-sleep stages, MSS) estimation based on sleep sounds analysis. Our working hypothesis is that the properties of sleep sounds, such as breathing and movement, within each MSS are different. We recorded audio signals, using non-contact microphones, of 250 patients referred to a polysomnography (PSG) study in a sleep laboratory. We trained an ensemble of one-layer, feedforward neural network classifiers fed by time-series of sleep sounds to produce real-time and offline analyses. The audio-based system was validated and produced an epoch-by-epoch (standard 30-sec segments) agreement with PSG of 87% with Cohen's kappa of 0.7. This study shows the potential of audio signal analysis as a simple, convenient, and reliable MSS estimation without contact sensors.
Affiliation(s)
- Eliran Dafna: Department of Biomedical Engineering, Faculty of Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Ariel Tarasiuk: Sleep-Wake Disorders Unit, Soroka University Medical Center, and Department of Physiology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yaniv Zigel: Department of Biomedical Engineering, Faculty of Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
16
Detection of sleep breathing sound based on artificial neural network analysis. Biomed Signal Process Control 2018. DOI: 10.1016/j.bspc.2017.11.005.
17
Shabtai NR, Zigel Y. Spatial acoustic radiation of respiratory sounds for sleep evaluation. J Acoust Soc Am 2017; 142:1291. PMID: 28964100. DOI: 10.1121/1.4999319.
Abstract
Body posture affects sleep quality and breathing disorders, so recognizing it is important for completing the sleep evaluation process. Since humans have a directional acoustic radiation pattern, it is hypothesized that microphone arrays can be used to recognize different body postures, which is highly practical for sleep evaluation applications that already measure respiratory sounds with distant microphones. Furthermore, body posture may affect the distant-microphone measurement itself; hence, the measurement can be compensated for if the body posture is correctly recognized. A spherical harmonics decomposition approach to the spatial acoustic radiation is presented, assuming an array of eight microphones in a medium-sized audiology booth. The spatial sampling and reconstruction of the radiation pattern are discussed, and a final setup for the microphone array is recommended. A case study is shown using recorded segments of snoring and breathing sounds from three human subjects in three body postures in a silent but not anechoic audiology booth.
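The posture-recognition idea can be illustrated with a much simpler stand-in for the spherical-harmonics machinery: match a measured eight-microphone intensity pattern against per-posture templates. All numbers and template values below are hypothetical; the paper's actual method decomposes the radiation pattern into spherical harmonics rather than correlating raw templates.

```python
import numpy as np

# hypothetical per-posture radiation templates: mean breathing-sound
# intensity (dB) at each of the 8 microphones, learned beforehand
TEMPLATES = {
    "supine": np.array([55, 54, 52, 50, 50, 52, 54, 55], float),
    "left":   np.array([58, 56, 53, 49, 47, 48, 51, 54], float),
    "right":  np.array([54, 51, 48, 47, 49, 53, 56, 58], float),
}

def classify_posture(measured):
    """Match a measured 8-mic pattern to the closest posture template.

    Mean-subtracting both sides makes the match insensitive to overall
    sound level, so only the spatial *shape* of the pattern matters.
    """
    m = measured - measured.mean()
    def score(t):
        t = t - t.mean()
        return np.dot(m, t) / (np.linalg.norm(m) * np.linalg.norm(t))
    return max(TEMPLATES, key=lambda k: score(TEMPLATES[k]))

reading = np.array([57, 55, 52, 48, 46, 47, 50, 53], float)  # louder on one side
print(classify_posture(reading))  # "left"
```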
Affiliation(s)
- Noam R Shabtai: Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel
- Yaniv Zigel: Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel
18
Noncontact Sleep Study by Multi-Modal Sensor Fusion. Sensors 2017; 17:s17071685. [PMID: 28753994] [PMCID: PMC5539697] [DOI: 10.3390/s17071685]
Abstract
Polysomnography (PSG) is considered the gold standard for determining sleep stages, but because its sensor attachments are obtrusive, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, these earlier approaches have not yet proven reliable. In addition, most such products are designed for healthy consumers rather than for patients with sleep disorders. We present a novel approach to classifying sleep stages via low-cost, noncontact multi-modal sensor fusion, which extracts sleep-related vital signs from radar signals and a sound-based context-awareness technique. This work is uniquely designed around the PSG data of sleep disorder patients, received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical and statistical knowledge to determine personally adjusted thresholds and devise post-processing. Its efficiency is highlighted by contrasting sleep stage classification performance between single-sensor and sensor-fusion algorithms. To assess the possibility of commercializing this work, the classification results were compared with those of a commercial sleep monitoring device, the ResMed S+. The proposed algorithm was evaluated on randomly selected patients following PSG examination, and the results show a promising approach for determining sleep stages in a low-cost and unobtrusive manner.
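A "personally adjusted threshold" can be sketched as deriving each subject's movement cutoff from that subject's own overnight signal rather than from a global constant. This is a minimal illustration on synthetic data; the median-plus-MAD rule and the constant k are assumptions, not the paper's actual procedure.

```python
import numpy as np

def personal_threshold(signal, k=8.0):
    """Personal-adjusted threshold: the subject's own median plus
    k robust deviations (median absolute deviation, MAD)."""
    med = np.median(signal)
    mad = np.median(np.abs(signal - med))
    return med + k * mad

rng = np.random.default_rng(0)
night = rng.normal(1.0, 0.1, 1000)  # synthetic quiet-sleep movement energy
night[::50] += 3.0                  # inject 20 large body movements
thr = personal_threshold(night)
print(np.sum(night > thr))          # the injected movements exceed the cutoff
```

Because the threshold is computed from robust statistics of the subject's own night, a restless sleeper and a still sleeper each get a cutoff scaled to their baseline.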
19
Dafna E, Halevi M, Ben Or D, Tarasiuk A, Zigel Y. Estimation of macro sleep stages from whole night audio analysis. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:2847-2850. [PMID: 28268910] [DOI: 10.1109/embc.2016.7591323]
Abstract
During the routine sleep diagnostic procedure, sleep is broadly divided into three states: rapid eye movement (REM), non-REM (NREM), and wake, frequently named macro-sleep stages (MSS). In this study, we present a pioneering attempt at MSS detection using full-night audio analysis. Our working hypothesis is that there may be differences in sound properties within each MSS due to breathing efforts (or snores) and body movements in bed. Audio signals of 35 patients referred to a sleep laboratory were recorded and analyzed. An additional 178 subjects were used to train a probabilistic time-series model for MSS staging across the night. The audio-based system was validated on 20 of the 35 subjects. System accuracy for detecting epoch-by-epoch wake/REM/NREM states for a given subject was 74% (69% for wake, 54% for REM, and 79% for NREM). Mean error (absolute difference) was 36±34 min for total sleep time, 17±21 min for sleep latency, 5±5% for sleep efficiency, and 7±5% for REM percentage. These encouraging results indicate that audio-based analysis can provide a simple and comfortable alternative method for ambulatory evaluation of sleep and its disorders.
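The summary measures validated here (total sleep time, sleep latency, sleep efficiency, REM percentage) all derive mechanically from the epoch-by-epoch stage sequence; a minimal sketch with a hypothetical night of labels:

```python
EPOCH_MIN = 0.5  # standard 30-second epochs

def sleep_summary(stages):
    """Summary metrics from an epoch sequence of 'W' (wake),
    'N' (NREM), and 'R' (REM) labels; assumes at least one sleep epoch."""
    sleep = [s for s in stages if s != "W"]
    first_sleep = next(i for i, s in enumerate(stages) if s != "W")
    return {
        "total_sleep_time_min": len(sleep) * EPOCH_MIN,
        "sleep_latency_min": first_sleep * EPOCH_MIN,
        "sleep_efficiency_pct": 100.0 * len(sleep) / len(stages),
        "rem_pct": 100.0 * sleep.count("R") / len(sleep),
    }

# hypothetical night: 10 min of wake, then NREM/REM, brief final wake
night = ["W"] * 20 + ["N"] * 600 + ["R"] * 150 + ["N"] * 180 + ["W"] * 10
print(sleep_summary(night))
```

This is why the errors quoted above are in minutes and percentages: a staging error propagates directly into these derived quantities.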
20
Levartovsky A, Dafna E, Zigel Y, Tarasiuk A. Breathing and Snoring Sound Characteristics during Sleep in Adults. J Clin Sleep Med 2017; 12:375-84. [PMID: 26518701] [DOI: 10.5664/jcsm.5588]
Abstract
STUDY OBJECTIVES A sound level meter is the gold-standard approach for snoring evaluation. Using this approach, it was established that snoring intensity (in dB) is higher in men and is associated with increased apnea-hypopnea index (AHI). In this study, we performed a systematic analysis of breathing and snoring sound characteristics using an algorithm designed to detect and analyze breathing and snoring sounds. The effects of sex, sleep stage, and AHI on snoring characteristics were explored. METHODS We consecutively recruited 121 subjects referred for diagnosis of obstructive sleep apnea. A whole-night audio signal was recorded using a noncontact ambient microphone during polysomnography. A large number (> 290,000) of breathing and snoring (> 50 dB) events were analyzed. Breathing sound events were detected using a signal-processing algorithm that discriminates between breathing and nonbreathing (noise) sounds. RESULTS The snoring index (events/h, SI) was 23% higher in men (p = 0.04), and in both sexes SI gradually declined by 50% across sleep time (p < 0.01), independent of AHI. SI was higher in slow wave sleep (p < 0.03) than in S2 and rapid eye movement sleep; men had higher SI than women in all sleep stages (p < 0.05). Snoring intensity was similar in both sexes in all sleep stages and independent of AHI. For both sexes, no correlation was found between AHI and snoring intensity (r = 0.1, p = 0.291). CONCLUSIONS This audio analysis approach enables systematic detection and analysis of breathing and snoring sounds from a full-night recording. Snoring intensity is similar in both sexes and was not affected by AHI.
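The snoring index itself is a simple rate; a sketch of how detected sound events might be turned into SI using the > 50 dB snore criterion quoted above (the event list and the detector's output format are hypothetical):

```python
def snoring_index(events, recording_hours, min_db=50.0):
    """Snoring index (SI): snore events per hour of recording.
    Each event is an (onset_seconds, peak_dB) pair; only sounds
    louder than min_db count as snores."""
    snores = [e for e in events if e[1] > min_db]
    return len(snores) / recording_hours

# hypothetical detector output over a 1.5-hour segment
events = [(12.0, 62.1), (340.5, 48.0), (1100.2, 55.3), (3600.1, 51.0)]
print(snoring_index(events, recording_hours=1.5))  # 2.0
```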
Affiliation(s)
- Asaf Levartovsky: Sleep-Wake Disorders Unit, Soroka University Medical Center and Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Israel
- Eliran Dafna: Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, Israel
- Yaniv Zigel: Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, Israel
- Ariel Tarasiuk: Sleep-Wake Disorders Unit, Soroka University Medical Center and Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Israel
21
Akhter S, Abeyratne UR. Detection of REM/NREM snores in obstructive sleep apnoea patients using a machine learning technique. Biomed Phys Eng Express 2016. [DOI: 10.1088/2057-1976/2/5/055022]
22
Dafna E, Rosenwein T, Tarasiuk A, Zigel Y. Breathing rate estimation during sleep using audio signal analysis. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:5981-4. [PMID: 26737654] [DOI: 10.1109/embc.2015.7319754]
Abstract
Sleep is associated with important changes in respiratory rate and ventilation. Currently, breathing rate (BR) is measured during sleep using an array of contact and wearable sensors, including airflow sensors and respiratory belts; there is a need for a simpler and more comfortable approach to monitoring respiration. Here, we present a new method for BR evaluation during sleep using a non-contact microphone. The basic idea behind this approach is that during sleep the upper airway becomes narrower due to muscle relaxation, which leads to louder breathing sounds that can be captured by an ambient microphone. In this study, we developed a signal processing algorithm that emphasizes breathing sounds, extracts breathing-related features, and estimates BR during sleep. Audio-based BR estimation was compared with BR calculated from the traditional (gold-standard) respiratory belts during an in-laboratory polysomnography (PSG) study in 204 subjects. Pearson's correlation between the subjects' averaged BR from the two approaches was R = 0.97. Epoch-by-epoch (30 s) BR comparison revealed a mean relative error of 2.44% and a Pearson's correlation of 0.68. This study shows reliable and promising results for non-contact BR estimation.
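At its simplest, an audio-based BR estimate can be obtained by locating the dominant spectral peak of the breathing-sound energy envelope within the physiological band. This is a sketch on a synthetic envelope; the paper's actual feature extraction is more elaborate, and the band limits here are assumptions.

```python
import numpy as np

def estimate_br(envelope, fs, lo=0.1, hi=0.7):
    """Estimate breathing rate (breaths/min) from a breathing-sound
    energy envelope by locating the dominant spectral peak in the
    lo-hi Hz band (0.1-0.7 Hz, i.e. 6-42 breaths/min)."""
    x = envelope - np.mean(envelope)          # remove DC before the FFT
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak_hz = freqs[band][np.argmax(spec[band])]
    return 60.0 * peak_hz

fs = 4.0                                      # envelope sample rate (Hz)
t = np.arange(0, 120, 1.0 / fs)               # two minutes of envelope
env = 1.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz -> 15 breaths/min
print(round(estimate_br(env, fs), 1))         # 15.0
```

Restricting the search to the physiological band keeps slow level drifts and fast noise from being mistaken for respiration.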