1
Kazemi K, Abiri A, Zhou Y, Rahmani A, Khayat RN, Liljeberg P, Khine M. Improved sleep stage predictions by deep learning of photoplethysmogram and respiration patterns. Comput Biol Med 2024; 179:108679. [PMID: 39033682] [DOI: 10.1016/j.compbiomed.2024.108679]
Abstract
Sleep staging is a crucial tool for diagnosing and monitoring sleep disorders, but the standard clinical approach using polysomnography (PSG) in a sleep lab is time-consuming, expensive, uncomfortable, and limited to a single night. Advancements in sensor technology have enabled home sleep monitoring, but existing devices still lack sufficient accuracy to inform clinical decisions. To address this challenge, we propose a deep learning architecture that combines a convolutional neural network and bidirectional long short-term memory to accurately classify sleep stages. By supplementing photoplethysmography (PPG) signals with respiratory sensor inputs, we demonstrated significant improvements in prediction accuracy and Cohen's kappa (κ) for 2-stage (92.7%; κ = 0.768), 3-stage (80.2%; κ = 0.714), 4-stage (76.8%; κ = 0.550), and 5-stage (76.7%; κ = 0.616) sleep classification using raw data. This readily translatable approach, which pairs a lightweight model with only a few inexpensive sensors, shows promise for accurate sleep staging and for diagnosing and managing sleep disorders in a more accessible and practical manner, possibly even at home.
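As an illustration of the kind of raw-signal input such a model consumes, the sketch below segments synchronized PPG and respiration into 30-s, two-channel epochs; this is a reconstruction for illustration only, not the authors' pipeline, and the 64 Hz sampling rate is an assumption.

```python
import numpy as np

def make_epochs(ppg, resp, fs=64, epoch_s=30):
    """Stack PPG and respiration into a (n_epochs, samples, 2) array,
    the shape a raw-input CNN + BiLSTM sleep stager would consume."""
    n = epoch_s * fs
    n_epochs = min(len(ppg), len(resp)) // n
    return np.stack([ppg[:n_epochs * n].reshape(n_epochs, n),
                     resp[:n_epochs * n].reshape(n_epochs, n)], axis=-1)

# 10 minutes of synthetic signals at 64 Hz
fs = 64
t = np.arange(600 * fs) / fs
ppg = np.sin(2 * np.pi * 1.2 * t)      # ~72 bpm pulse wave
resp = np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths/min
x = make_epochs(ppg, resp, fs)
print(x.shape)  # (20, 1920, 2): 20 epochs of 30 s x 64 Hz, 2 channels
```

Each epoch would then receive one sleep-stage label for supervised training.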
Affiliation(s)
- Arash Abiri
  - Department of Biomedical Engineering, University of California Irvine, Irvine, CA, United States
- Yongxiao Zhou
  - Department of Biomedical Engineering, University of California Irvine, Irvine, CA, United States
- Amir Rahmani
  - Department of Computer Science, University of California, Irvine, Irvine, CA, United States; School of Nursing, University of California, Irvine, Irvine, CA, United States
- Rami N Khayat
  - Division of Pulmonary and Critical Care Medicine, The UCI Comprehensive Sleep Center, University of California, Irvine, Newport Beach, CA, United States
- Michelle Khine
  - Department of Biomedical Engineering, University of California Irvine, Irvine, CA, United States
2
Vaussenat F, Bhattacharya A, Boudreau P, Boivin DB, Gagnon G, Cloutier SG. Derivative Method to Detect Sleep and Awake States through Heart Rate Variability Analysis Using Machine Learning Algorithms. Sensors (Basel) 2024; 24:4317. [PMID: 39001096] [PMCID: PMC11243930] [DOI: 10.3390/s24134317]
Abstract
Sleep disorders can have harmful consequences in both the short and long term, leading to attention deficits as well as cardiac, neurological and behavioral repercussions. One of the most widely used methods for assessing sleep disorders is polysomnography (PSG). A major challenge with this method is the cabling needed to connect the recording devices, which makes the examination more intrusive, usually confines it to a clinical environment, and can affect the test results and their accuracy. One simple way to assess the state of the central nervous system (CNS), a well-known indicator of sleep disorder, could be a portable medical device. With this in mind, we implemented a simple model that uses both the RR interval (RRI) and its second derivative to accurately predict the awake and napping states of a subject with a feature classification model. For training and validation, we used a database of measurements from nine healthy young adults (six men and three women), in which heart rate variability (HRV) was recorded around light-on, light-off, sleep onset and sleep offset events. Results show that a 30 min RRI time-series window suffices for this lightweight model to accurately predict whether the patient was awake or napping.
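A minimal sketch of the derivative idea on synthetic data: summary features of an RRI window and its second derivative feed a generic classifier. The feature set, window size, and random-forest model here are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rri_features(rri_window):
    """Summary features of an RR-interval window and its second
    derivative, in the spirit of derivative-based sleep/wake detection."""
    d2 = np.diff(rri_window, n=2)
    return [rri_window.mean(), rri_window.std(), np.abs(d2).mean(), d2.std()]

rng = np.random.default_rng(0)
# Synthetic windows: 'awake' has shorter, more variable RR intervals (seconds)
awake = [rri_features(rng.normal(0.7, 0.08, 120)) for _ in range(50)]
napping = [rri_features(rng.normal(1.0, 0.03, 120)) for _ in range(50)]
X = np.array(awake + napping)
y = np.array([1] * 50 + [0] * 50)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on clearly separated classes
```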
Affiliation(s)
- Fabrice Vaussenat
  - Department of Electrical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, QC H3C 1K3, Canada
- Abhiroop Bhattacharya
  - Department of Electrical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, QC H3C 1K3, Canada
- Philippe Boudreau
  - Centre for Study and Treatment of Circadian Rhythms, Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montréal, QC H4H 1R3, Canada
- Diane B. Boivin
  - Centre for Study and Treatment of Circadian Rhythms, Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montréal, QC H4H 1R3, Canada
- Ghyslain Gagnon
  - Department of Electrical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, QC H3C 1K3, Canada
- Sylvain G. Cloutier
  - Department of Electrical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, QC H3C 1K3, Canada
3
Fei K, Wang J, Pan L, Wang X, Chen B. A sleep staging model on wavelet-based adaptive spectrogram reconstruction and light weight CNN. Comput Biol Med 2024; 173:108300. [PMID: 38547654] [DOI: 10.1016/j.compbiomed.2024.108300]
Abstract
Effective methods for automatic sleep staging are important for the diagnosis and treatment of sleep disorders. Sleep EEG is a weak signal with complex frequency components, especially during transitions between sleep stages. Wavelet-based adaptive spectrogram reconstruction (WASR) by seed growth is used to capture the dominant time-frequency patterns of sleep EEG. We introduced a variant energy measure from the Teager operator into WASR to capture hidden dynamic patterns of the EEG, which produced additional spectrograms. These spectrograms enabled a lightweight CNN to detect and extract finer details of different sleep stages, improving the feature representation of the EEG. With a specially designed depthwise separable convolution, the lightweight CNN achieved more robust sleep stage classification. Experiments on the Sleep-EDF 20 dataset showed that the proposed model yielded an overall accuracy of 87.6%, an F1-score of 82.1%, and a Cohen kappa of 0.83, which is competitive with baselines at reduced computational cost.
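The Teager operator mentioned above has a standard discrete form, the Teager-Kaiser energy operator; a small sketch (not the paper's implementation) showing its exactness on a sampled sinusoid:

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]**2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 100
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)   # 10 Hz tone sampled at 100 Hz
psi = teager(x)
# For a sampled sinusoid the operator is exactly constant: sin(2*pi*f/fs)**2
print(np.allclose(psi, np.sin(0.2 * np.pi) ** 2))  # True
```

Because the operator tracks both amplitude and instantaneous frequency, applying it before spectrogram construction highlights transient dynamics that a plain power spectrogram can miss.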
Affiliation(s)
- Keling Fei
  - School of Mechanical Engineering, Changzhou University, Changzhou 213164, China
- Jianghui Wang
  - School of Mechanical Engineering, Changzhou University, Changzhou 213164, China
- Lizhen Pan
  - School of Mechanical Engineering, Changzhou University, Changzhou 213164, China
- Xu Wang
  - Gansu Provincial Maternity and Child-care Hospital, Lanzhou 730070, China
- Baohong Chen
  - School of Mechanical Engineering, Changzhou University, Changzhou 213164, China
4
Yun R, Rembado I, Perlmutter SI, Rao RPN, Fetz EE. Local field potentials and single unit dynamics in motor cortex of unconstrained macaques during different behavioral states. Front Neurosci 2023; 17:1273627. [PMID: 38075283] [PMCID: PMC10702227] [DOI: 10.3389/fnins.2023.1273627]
Abstract
Different sleep stages have been shown to be vital for a variety of brain functions, including learning, memory, and skill consolidation. However, our understanding of neural dynamics during sleep and the role of prominent local field potential (LFP) frequency bands remains incomplete. To elucidate these dynamics and the differences between behavioral states, we collected multichannel LFP and spike data in the primary motor cortex of unconstrained macaques for up to 24 h using a head-fixed brain-computer interface (Neurochip3). Each 8-s bin of time was classified into awake-moving (Move), awake-resting (Rest), REM sleep (REM), or non-REM sleep (NREM) by applying dimensionality reduction and clustering to the average spectral density and the acceleration of the head. LFP power showed high delta during NREM, high theta during REM, and high beta while the animal was awake. Cross-frequency phase-amplitude coupling was typically higher during NREM between all pairs of frequency bands; two notable exceptions were high delta-high gamma and theta-high gamma coupling during Move, and high theta-beta coupling during REM. Single units showed decreased firing rates during NREM, though with more short interspike intervals (ISIs) than in other states. Spike-LFP synchrony showed high delta synchrony during Move and higher coupling with all other frequency bands during NREM. Altogether, these results reveal potential roles and functions of different LFP bands that have previously been unexplored.
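Phase-amplitude coupling of the kind reported above is commonly quantified with a Hilbert-transform mean-vector-length metric; a hedged sketch on synthetic data follows (the authors' exact coupling measure and band edges are not specified here, and the bands below are illustrative).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling estimate:
    phase of the low-frequency band vs. envelope of the high band."""
    def bandpass(sig, band):
        sos = butter(4, band, btype='band', fs=fs, output='sos')
        return sosfiltfilt(sos, sig)
    phase = np.angle(hilbert(bandpass(x, phase_band)))
    amp = np.abs(hilbert(bandpass(x, amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
slow = np.sin(2 * np.pi * 3 * t)              # delta-range rhythm
gamma = np.sin(2 * np.pi * 80 * t)            # high-gamma carrier
coupled = slow + (1 + 0.9 * slow) * gamma     # gamma amplitude locked to delta phase
uncoupled = slow + gamma                      # no modulation
noise = 0.1 * rng.standard_normal(t.size)
mi_c = pac_mvl(coupled + noise, fs, (2, 4), (70, 90))
mi_u = pac_mvl(uncoupled + noise, fs, (2, 4), (70, 90))
print(mi_c > mi_u)  # True: the metric is larger for the modulated signal
```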
Affiliation(s)
- Richy Yun
  - Department of Bioengineering, University of Washington, Seattle, WA, United States
  - Center for Neurotechnology, University of Washington, Seattle, WA, United States
  - Washington National Primate Research Center, University of Washington, Seattle, WA, United States
- Irene Rembado
  - Washington National Primate Research Center, University of Washington, Seattle, WA, United States
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
- Steve I. Perlmutter
  - Center for Neurotechnology, University of Washington, Seattle, WA, United States
  - Washington National Primate Research Center, University of Washington, Seattle, WA, United States
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
- Rajesh P. N. Rao
  - Center for Neurotechnology, University of Washington, Seattle, WA, United States
  - Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
- Eberhard E. Fetz
  - Department of Bioengineering, University of Washington, Seattle, WA, United States
  - Center for Neurotechnology, University of Washington, Seattle, WA, United States
  - Washington National Primate Research Center, University of Washington, Seattle, WA, United States
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
5
Lepage KQ, Jain S, Kvavilashvili A, Witcher M, Vijayan S. Unsupervised Multitaper Spectral Method for Identifying REM Sleep in Intracranial EEG Recordings Lacking EOG/EMG Data. Bioengineering (Basel) 2023; 10:1009. [PMID: 37760111] [PMCID: PMC10525760] [DOI: 10.3390/bioengineering10091009]
Abstract
A large number of human intracranial EEG (iEEG) recordings have been collected for clinical purposes at institutions all over the world, but the vast majority are unaccompanied by the EOG and EMG recordings that accepted methods require to separate wake episodes from REM sleep. To make full use of this extremely valuable data, an accurate method of classifying sleep from iEEG recordings alone is required. Existing sleep-scoring methods using only iEEG accurately classify all stages of sleep, except that wake (W) and rapid-eye-movement (REM) sleep are not well distinguished. A novel multitaper (wake vs. REM) alpha-rhythm classifier is developed by generalizing K-means clustering for use with multitaper spectral eigencoefficients. The performance of this unsupervised method is assessed in a hold-out analysis on eight subjects exhibiting normal sleep architecture and is compared against a classical power detector. The proposed multitaper classifier correctly identifies 36 ± 6 min of REM in one night of recorded sleep while incorrectly labeling less than 10% of all labeled 30-s epochs for all but one subject (human inter-rater reliability is estimated to be near 80%), and it outperforms the equivalent classical statistical-power test. The hold-out analysis indicates that, with one night's worth of data, the method is likely to generalize accurately to new data. For the purpose of studying sleep, the introduced multitaper alpha-rhythm classifier paves the way to making available a large quantity of otherwise unusable iEEG data.
Affiliation(s)
- Kyle Q. Lepage
  - School of Neuroscience, Sandy Hall, Virginia Tech, 210 Drillfield Drive, Blacksburg, VA 24060, USA
- Sparsh Jain
  - Department of Biomedical Engineering and Mechanics, Virginia Tech, 325 Stanger St., Blacksburg, VA 24061, USA
- Andrew Kvavilashvili
  - School of Neuroscience, Sandy Hall, Virginia Tech, 210 Drillfield Drive, Blacksburg, VA 24060, USA
- Mark Witcher
  - Section of Neurosurgery, Carilion Clinic, Carilion Roanoke Memorial Hospital, 1906 Belleview Ave SE, Roanoke, VA 24014, USA
- Sujith Vijayan
  - School of Neuroscience, Sandy Hall, Virginia Tech, 210 Drillfield Drive, Blacksburg, VA 24060, USA
6
Hasan MN, Koo I. Mixed-Input Deep Learning Approach to Sleep/Wake State Classification by Using EEG Signals. Diagnostics (Basel) 2023; 13:2358. [PMID: 37510104] [PMCID: PMC10378260] [DOI: 10.3390/diagnostics13142358]
Abstract
Sleep stage classification plays a pivotal role in predicting and diagnosing numerous health issues from human sleep data. Manual sleep staging requires human expertise and is occasionally prone to error and variation. In recent times, the availability of polysomnography data has aided progress in automatic sleep-stage classification. In this paper, a hybrid deep learning model is proposed for classifying sleep and wake states based on a single-channel electroencephalogram (EEG) signal. The model combines an artificial neural network (ANN) and a convolutional neural network (CNN) trained using mixed-input features: the ANN uses statistical features calculated from EEG epochs, and the CNN operates on Hilbert spectrum images generated for each epoch. The proposed method is assessed using single-channel Pz-Oz EEG signals from the Sleep-EDF Database Expanded. Classification performance on four randomly selected individuals shows that the proposed model can achieve an accuracy of around 96% in distinguishing sleep from wake states in EEG recordings.
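The two input branches can be sketched per epoch as follows; the statistical feature set is illustrative rather than the paper's exact list, and a 1-D Hilbert envelope stands in for a full Hilbert spectrum image.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew, kurtosis

def epoch_stats(epoch):
    """Statistical features of one EEG epoch for the ANN branch."""
    return [epoch.mean(), epoch.std(), skew(epoch), kurtosis(epoch)]

def hilbert_envelope(epoch):
    """Instantaneous amplitude for the CNN branch (a 1-D stand-in
    for a full Hilbert spectrum image)."""
    return np.abs(hilbert(epoch))

rng = np.random.default_rng(5)
epoch = rng.standard_normal(3000)      # one 30-s epoch at 100 Hz
features = epoch_stats(epoch)          # 4 scalar features
envelope = hilbert_envelope(epoch)     # same length as the epoch
print(len(features), envelope.shape)   # 4 (3000,)
```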
Affiliation(s)
- Md Nazmul Hasan
  - Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea
- Insoo Koo
  - Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea
7
Einizade A, Nasiri S, Sardouie SH, Clifford GD. ProductGraphSleepNet: Sleep staging using product spatio-temporal graph learning with attentive temporal aggregation. Neural Netw 2023; 164:667-680. [PMID: 37245479] [DOI: 10.1016/j.neunet.2023.05.016]
Abstract
The classification of sleep stages plays a crucial role in understanding and diagnosing sleep pathophysiology. Sleep stage scoring relies heavily on visual inspection by an expert, which is a time-consuming and subjective procedure. Recently, deep learning approaches have been leveraged to develop generalized automated sleep staging that accounts for shifts in distributions caused by inherent inter- and intra-subject variability, heterogeneity across datasets, and different recording environments. However, these networks mostly ignore the connections among brain regions and disregard modeling the connections between temporally adjacent sleep epochs. To address these issues, this work proposes an adaptive product graph learning-based graph convolutional network, named ProductGraphSleepNet, for learning joint spatio-temporal graphs, along with a bidirectional gated recurrent unit and a modified graph attention network to capture the attentive dynamics of sleep stage transitions. Evaluation on two public databases, the Montreal Archive of Sleep Studies (MASS) SS3 and the SleepEDF, which contain full-night polysomnography recordings of 62 and 20 healthy subjects respectively, demonstrates performance comparable to the state-of-the-art (accuracy: 0.867 and 0.838; F1-score: 0.818 and 0.774; kappa: 0.802 and 0.775, respectively). More importantly, the proposed network makes it possible for clinicians to comprehend and interpret the learned spatial and temporal connectivity graphs for sleep stages.
Affiliation(s)
- Aref Einizade
  - Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Samaneh Nasiri
  - Massachusetts General Hospital, Harvard Medical School, MA, USA
- Gari D Clifford
  - Georgia Institute of Technology, GA, USA; Emory School of Medicine, GA, USA
8
Kang C, An S, Kim HJ, Devi M, Cho A, Hwang S, Lee HW. Age-integrated artificial intelligence framework for sleep stage classification and obstructive sleep apnea screening. Front Neurosci 2023; 17:1059186. [PMID: 37389364] [PMCID: PMC10300414] [DOI: 10.3389/fnins.2023.1059186]
Abstract
Introduction
Sleep is an essential function for sustaining a healthy life, and sleep dysfunction can cause various physical and mental issues. In particular, obstructive sleep apnea (OSA) is one of the most common sleep disorders and, if not treated in a timely manner, can lead to critical problems such as hypertension or heart disease.
Methods
The first crucial step in evaluating an individual's quality of sleep and diagnosing sleep disorders is to classify sleep stages using polysomnographic (PSG) data, including electroencephalography (EEG). To date, such sleep stage scoring has mainly been performed manually via visual inspection by experts, which is not only time-consuming and laborious but may also yield subjective results. We therefore developed a computational framework that automatically classifies sleep stages from the power spectral density (PSD) features of sleep EEG using three learning algorithms: support vector machine, k-nearest neighbors, and multilayer perceptron (MLP). We further propose an integrated artificial intelligence (AI) framework that informs the risk of OSA from characteristics of the automatically scored sleep stages. Given the previous finding that sleep EEG characteristics differ by age group, we trained age-specific models (younger and older groups) alongside a general model and compared their performance.
Results
The younger age-specific model performed similarly to the general model (and even better at certain stages), but the older age-specific model performed rather poorly, suggesting that bias in individual variables, such as age, should be considered during model training. Our integrated model yielded an accuracy of 73% in sleep stage classification and 73% in OSA screening when the MLP algorithm was applied, indicating that patients with OSA could be screened at this accuracy level with sleep EEG alone, without respiration-related measures.
Discussion
These outcomes demonstrate the feasibility of AI-based computational studies that, combined with advances in wearable devices and related technologies, could contribute to personalized medicine by conveniently assessing an individual's sleep status at home, alerting users to the risk of sleep disorders, and enabling early intervention.
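The PSD-feature approach described above can be sketched as Welch band powers feeding an MLP classifier; the band edges, epoch parameters, and model size below are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}

def psd_band_features(epoch, fs):
    """Relative band powers of one EEG epoch from a Welch PSD."""
    freqs, psd = welch(epoch, fs, nperseg=fs * 4)
    total = psd[(freqs >= 0.5) & (freqs <= 30)].sum()
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in BANDS.values()]

rng = np.random.default_rng(3)
fs = 100
t = np.arange(30 * fs) / fs
X, y = [], []
for i in range(60):
    f = 2 if i < 30 else 10   # crude delta-dominant vs alpha-dominant epochs
    sig = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
    X.append(psd_band_features(sig, fs))
    y.append(int(i < 30))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```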
Affiliation(s)
- Chaewon Kang
  - Computational Medicine, System Health Science and Engineering Program, Ewha Womans University, Seoul, Republic of Korea
- Sora An
  - Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
- Hyeon Jin Kim
  - Department of Neurology, Korea University Ansan Hospital, Ansan, Republic of Korea
  - Department of Neurology, Ewha Womans University School of Medicine, Seoul, Republic of Korea
- Maithreyee Devi
  - Computational Medicine, System Health Science and Engineering Program, Ewha Womans University, Seoul, Republic of Korea
- Aram Cho
  - Department of Nursing Science, Ewha Womans University, Seoul, Republic of Korea
- Sungeun Hwang
  - Department of Neurology, Ewha Womans University Mogdong Hospital, Seoul, Republic of Korea
- Hyang Woon Lee
  - Computational Medicine, System Health Science and Engineering Program, Ewha Womans University, Seoul, Republic of Korea
  - Department of Neurology, Ewha Womans University School of Medicine, Seoul, Republic of Korea
  - Department of Medical Science, Ewha Womans University School of Medicine and Ewha Medical Research Institute, Seoul, Republic of Korea
9
Song TA, Chowdhury SR, Malekzadeh M, Harrison S, Hoge TB, Redline S, Stone KL, Saxena R, Purcell SM, Dutta J. AI-Driven sleep staging from actigraphy and heart rate. PLoS One 2023; 18:e0285703. [PMID: 37195925] [PMCID: PMC10191307] [DOI: 10.1371/journal.pone.0285703]
Abstract
Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich because of the smaller number of modalities and the less accurate measurements imposed by the small form factor. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health, and the more challenging multi-class (three-, four-, or five-class) staging of sleep from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist wearable. Our method relies on raw time-series data and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort.
SLAMSS achieves an overall accuracy of 79%, weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging, and an overall accuracy of 70-72%, weighted F1 score of 0.72-0.73, 64-66% sensitivity, and 89-90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging, and an overall accuracy of 68-69%, weighted F1 score of 0.68-0.69, 60-63% sensitivity, and 88-89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor inputs at low temporal resolution. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy; this is especially significant for four-class staging, where deep sleep is severely underrepresented. We show that, by choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours vs. PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours vs. PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases, so our method, which enables accurate deep sleep estimation from wearables-derived data, is promising for clinical applications requiring long-term deep sleep monitoring.
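The class-imbalance point about deep sleep can be illustrated with inverse-frequency loss weights, one generic way to keep a rare stage from being ignored; this is not SLAMSS's actual loss function, and the toy hypnogram below is invented for the example.

```python
import numpy as np

def inverse_frequency_weights(stage_labels):
    """Per-class loss weights inversely proportional to class frequency,
    so the rarest stage contributes most heavily per example."""
    classes, counts = np.unique(stage_labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Toy hypnogram where deep sleep is severely underrepresented
labels = ['wake'] * 200 + ['light'] * 400 + ['deep'] * 50 + ['REM'] * 150
weights = inverse_frequency_weights(labels)
print(weights['deep'])  # 4.0 -- the rarest stage gets the largest weight
```

These weights would then scale each class's term in a cross-entropy-style loss.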
Affiliation(s)
- Tzu-An Song
  - University of Massachusetts Amherst, Amherst, MA, United States of America
- Masoud Malekzadeh
  - University of Massachusetts Amherst, Amherst, MA, United States of America
- Stephanie Harrison
  - California Pacific Medical Center Research Institute, San Francisco, CA, United States of America
- Terri Blackwell Hoge
  - California Pacific Medical Center Research Institute, San Francisco, CA, United States of America
- Susan Redline
  - Brigham and Women’s Hospital, Boston, MA, United States of America
- Katie L. Stone
  - California Pacific Medical Center Research Institute, San Francisco, CA, United States of America
- Richa Saxena
  - Massachusetts General Hospital, Boston, MA, United States of America
- Shaun M. Purcell
  - Brigham and Women’s Hospital, Boston, MA, United States of America
- Joyita Dutta
  - University of Massachusetts Amherst, Amherst, MA, United States of America
10
Zhu H, Fu C, Shu F, Yu H, Chen C, Chen W. The Effect of Coupled Electroencephalography Signals in Electrooculography Signals on Sleep Staging Based on Deep Learning Methods. Bioengineering (Basel) 2023; 10:573. [PMID: 37237643] [DOI: 10.3390/bioengineering10050573]
Abstract
The influence of coupled electroencephalography (EEG) signals in electrooculography (EOG) recordings on EOG-based automatic sleep staging has been ignored. Because EOG and prefrontal EEG are collected at close range, it is unclear how much EEG couples into the EOG, and whether the EOG signal achieves good sleep staging results because of its intrinsic characteristics or because of this coupled EEG content. In this paper, the effect of the coupled EEG signal in EOG on automatic sleep staging is explored. A blind source separation algorithm was used to extract a clean prefrontal EEG signal, and the raw EOG and clean prefrontal EEG signals were then combined to obtain EOG signals coupled with different amounts of EEG content. These coupled EOG signals were fed into a hierarchical neural network comprising a convolutional neural network and a recurrent neural network for automatic sleep staging. Experiments on two public datasets and one clinical dataset showed that the coupled EOG signal achieved accuracies of 80.4%, 81.1%, and 78.9%, respectively, slightly better than sleep staging using the EOG signal without coupled EEG. Thus, an appropriate amount of coupled EEG in the EOG signal improves sleep staging, providing an experimental basis for sleep staging with EOG signals.
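Blind source separation of coupled EEG from EOG can be illustrated with ICA on a synthetic two-channel mixture; the paper's specific BSS algorithm is not reproduced here, and the sources, mixing coefficients, and FastICA choice are all assumptions for the demo.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.arange(0, 10, 0.01)
eog = np.sign(np.sin(2 * np.pi * 0.3 * t))   # slow, eye-movement-like source
eeg = np.sin(2 * np.pi * 10 * t)             # alpha-like cortical source
# Each recorded channel is a mixture: EEG couples into the EOG lead and vice versa
mixed = np.c_[eog + 0.4 * eeg, eeg + 0.3 * eog]
mixed = mixed + 0.05 * rng.standard_normal(mixed.shape)
sources = FastICA(n_components=2, random_state=0).fit_transform(mixed)
# each recovered component should correlate strongly with one underlying source
```

With the sources unmixed, the EEG component can be rescaled and re-added to the EOG at controlled levels, mirroring the "different amounts of coupled EEG" experiment.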
Affiliation(s)
- Hangyu Zhu
  - School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Cong Fu
  - Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Feng Shu
  - Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Huan Yu
  - Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Chen Chen
  - Human Phenome Institute, Fudan University, Shanghai 201203, China
- Wei Chen
  - School of Information Science and Technology, Fudan University, Shanghai 200433, China
  - Human Phenome Institute, Fudan University, Shanghai 201203, China
11
A Long Short-Term Memory Network Using Resting-State Electroencephalogram to Predict Outcomes Following Moderate Traumatic Brain Injury. Computers 2023. [DOI: 10.3390/computers12020045]
Abstract
Although traumatic brain injury (TBI) is a global public health issue, not all injuries necessitate additional hospitalisation. TBI can negatively impact thinking, memory, attention, personality, and movement, yet only a small proportion of nonsevere TBIs necessitate prolonged observation. Clinicians would benefit from an electroencephalography (EEG)-based computational intelligence model for outcome prediction, giving them an evidence-based analysis with which to securely discharge patients at minimal risk of TBI-related mortality. Despite the increasing popularity of EEG-based deep learning research for building predictive models with breakthrough performance, particularly in epilepsy prediction, its use in clinical decision making for the diagnosis and prognosis of TBI has not been as widely exploited. Therefore, using 60-s segments of unprocessed resting-state EEG data as input, we propose a long short-term memory (LSTM) network that can distinguish between improved and unimproved outcomes in patients with moderate TBI. Complex feature extraction and selection are avoided in this architecture. The experimental results show that, with a classification accuracy of 87.50 ± 0.05%, the proposed prognostic model outperforms three related works. The results suggest that the proposed methodology is an efficient and reliable strategy for helping clinicians build an automated tool for predicting treatment outcomes from EEG signals.
12.
Migovich M, Ullal A, Fu C, Peters SU, Sarkar N. Feasibility of wearable devices and machine learning for sleep classification in children with Rett syndrome: A pilot study. Digit Health 2023; 9:20552076231191622. [PMID: 37545628] [PMCID: PMC10399268] [DOI: 10.1177/20552076231191622]
Abstract
Sleep is vital to many processes involved in children's health and well-being; however, an estimated 80% of children with Rett syndrome suffer from sleep disorders. Caregiver reports and questionnaires, the current method of studying sleep, are prone to observer bias and missed information. Polysomnography is considered the gold standard for sleep analysis but is labor- and cost-intensive, limiting the frequency of data collection for sleep disorder studies. Wearable digital health technologies, such as actigraphy devices, have shown potential and feasibility for sleep analysis in Rett syndrome but have not been validated against polysomnography, and the accelerometer data they collect is limited by the rigidity, periodic limb movements, and involuntary muscle contractions prevalent in Rett syndrome. Heart rate and electrodermal activity, along with other physiological signals, have been linked to sleep stages and can be combined with machine learning to provide better resistance to noise and false positives than actigraphy alone. This research addresses the gap in Rett syndrome sleep analysis by comparing the performance of a machine learning model that uses both accelerometer and physiological data features against gold-standard polysomnography. Our analytical validation pilot study (n = 7) found that, using physiological and accelerometer features, our machine learning models can differentiate between awake, non-rapid eye movement sleep, and rapid eye movement sleep in children with Rett syndrome with an accuracy of 85.1% when using an individual model. Additionally, this work demonstrates that digital health technologies are feasible in Rett syndrome, even at a young age, without data loss or interference from the repetitive movements characteristic of the condition.
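The feature-based approach described above can be illustrated with a toy three-class classifier. The study's actual model is not specified here, so this sketch substitutes a simple nearest-centroid rule over hypothetical per-epoch features (heart rate, electrodermal activity, accelerometer magnitude); all numbers are synthetic, not taken from the study.

```python
import numpy as np

STAGES = ["wake", "nrem", "rem"]

def fit_centroids(X, y):
    """Per-class mean feature vector. X: (n, 3); y: ints in 0..2."""
    return np.stack([X[y == k].mean(axis=0) for k in range(len(STAGES))])

def predict(centroids, X):
    """Assign each epoch to the nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Synthetic training epochs: [heart rate (bpm), EDA (uS), accel magnitude]
rng = np.random.default_rng(1)
mu = np.array([[75.0, 2.0, 0.8],    # wake: elevated HR, EDA, movement
               [60.0, 0.5, 0.1],    # NREM: low across the board
               [68.0, 0.7, 0.05]])  # REM: raised HR, near-zero movement
y_train = np.repeat(np.arange(3), 50)
X_train = mu[y_train] + rng.standard_normal((150, 3)) * 0.3
cent = fit_centroids(X_train, y_train)
pred = predict(cent, np.array([[61.0, 0.6, 0.1]]))
print(STAGES[pred[0]])  # → nrem
```

The design point the abstract makes carries over: because classification rests on physiological features rather than raw movement counts, repetitive motor artifacts shift only one of the three feature dimensions.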
Affiliation(s)
- Miroslava Migovich
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Akshith Ullal
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Cary Fu
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Sarika U Peters
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Kennedy Center, Nashville, TN, USA
- Nilanjan Sarkar
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
13.
Wang Q, Guo Y, Shen Y, Tong S, Guo H. Multi-Layer Graph Attention Network for Sleep Stage Classification Based on EEG. Sensors (Basel) 2022; 22:9272. [PMID: 36501974] [PMCID: PMC9735886] [DOI: 10.3390/s22239272]
Abstract
Graph neural networks have been successfully applied to sleep stage classification, but challenges remain: (1) how to effectively utilize epoch information from adjacent EEG channels, given their different interaction effects; (2) how to extract the most representative features from the transitional information in easily confused stages; and (3) how to improve classification accuracy over existing models. To address these shortcomings, we propose a multi-layer graph attention network (MGANet). Node-level attention prompts the graph attention convolution and GRU to focus on and differentiate the interactions between channels in the time-frequency domain and the spatial domain, respectively. A multi-head spatial-temporal mechanism balances channel weights and dynamically adjusts channel features, and the multi-layer graph attention network accurately expresses spatial sleep information. Moreover, stage-level attention is applied to easily confused sleep stages, which mitigates the limitations of graph convolutional networks on large-scale sleep-stage graphs. Experimentally, classification accuracy, MF1, and Kappa reached 0.825, 0.814, and 0.775 on the ISRUC dataset and 0.873, 0.801, and 0.827 on the SHHS dataset, showing that MGANet outperformed the state-of-the-art baselines.
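The node-level attention described above follows the standard graph-attention pattern: project each channel's features, score every channel pair, softmax the scores over neighbours, and aggregate. A minimal single-head NumPy sketch follows; the channel count, feature sizes, and fully connected channel graph are illustrative assumptions, not MGANet's actual configuration.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """One single-head graph-attention layer.
    H: (N, F) node features (one node per EEG channel)
    A: (N, N) adjacency matrix with self-loops
    W: (F, F_out) shared projection; a: (2*F_out,) attention vector."""
    Z = H @ W                                   # project node features
    F_out = Z.shape[1]
    src = Z @ a[:F_out]                         # per-source attention terms
    dst = Z @ a[F_out:]                         # per-target attention terms
    e = leaky_relu(src[:, None] + dst[None, :]) # pairwise scores e[i, j]
    e = np.where(A > 0, e, -1e9)                # mask non-edges before softmax
    alpha = softmax(e, axis=1)                  # attention over each node's neighbours
    return np.tanh(alpha @ Z)                   # aggregate neighbour features

rng = np.random.default_rng(2)
N, F, F_out = 6, 16, 8                 # hypothetical: 6 EEG channels, 16 features each
H = rng.standard_normal((N, F))
A = np.ones((N, N))                    # fully connected channel graph with self-loops
W = rng.standard_normal((F, F_out)) * 0.1
a = rng.standard_normal(2 * F_out) * 0.1
out = gat_layer(H, A, W, a)            # (6, 8) attended channel features
```

Stacking several such layers, as MGANet's name suggests, lets attention weights propagate channel interactions beyond immediate neighbours.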
Affiliation(s)
- Yecai Guo
- Correspondence: ; Tel.: +86-151-8980-2968