1. Ganglberger W, Nasiri S, Sun H, Kim S, Shin C, Westover MB, Thomas RJ. Refining sleep staging accuracy: transfer learning coupled with scorability models. Sleep 2024; 47:zsae202. PMID: 39215679. DOI: 10.1093/sleep/zsae202.
Abstract
STUDY OBJECTIVES This study aimed to (1) improve sleep staging accuracy through transfer learning (TL), to achieve or exceed human inter-expert agreement and (2) introduce a scorability model to assess the quality and trustworthiness of automated sleep staging. METHODS A deep neural network (base model) was trained on a large multi-site polysomnography (PSG) dataset from the United States. TL was used to calibrate the model to a reduced montage and limited samples from the Korean Genome and Epidemiology Study (KoGES) dataset. Model performance was compared to inter-expert reliability among three human experts. A scorability assessment was developed to predict the agreement between the model and human experts. RESULTS Initial sleep staging by the base model showed lower agreement with experts (κ = 0.55) compared to the inter-expert agreement (κ = 0.62). Calibration with 324 randomly sampled training cases matched expert agreement levels. Further targeted sampling improved performance, with models exceeding inter-expert agreement (κ = 0.70). The scorability assessment, combining biosignal quality and model confidence features, predicted model-expert agreement moderately well (R² = 0.42). Recordings with higher scorability scores demonstrated greater model-expert agreement than inter-expert agreement. Even with lower scorability scores, model performance was comparable to inter-expert agreement. CONCLUSIONS Fine-tuning a pretrained neural network through targeted TL significantly enhances sleep staging performance for an atypical montage, achieving and surpassing human expert agreement levels. The introduction of a scorability assessment provides a robust measure of reliability, ensuring quality control and enhancing the practical application of the system before deployment. This approach marks an important advancement in automated sleep analysis, demonstrating the potential for AI to exceed human performance in clinical settings.
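To make the transfer-learning step concrete, the sketch below freezes a pretrained feature extractor and re-trains only the classification head on a small calibration set from a new cohort. The network, layer names, and data are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal transfer-learning calibration sketch (hypothetical model and data).
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, n_channels=2, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(               # pretrained on the large source dataset
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten())
        self.head = nn.Linear(32 * 16, n_classes)    # re-trained on the target cohort

    def forward(self, x):
        return self.head(self.features(x))

model = SleepStager()
for p in model.features.parameters():                # freeze source-domain features
    p.requires_grad = False
optim = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one calibration step on a toy batch (30-s epochs at 100 Hz, 2 channels)
x = torch.randn(8, 2, 3000)
y = torch.randint(0, 5, (8,))
optim.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optim.step()
```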
Affiliation(s)
- Wolfgang Ganglberger: Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA; McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA
- Samaneh Nasiri: McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA; Biomedical Informatics & Neurology, Emory School of Medicine, Atlanta, GA, USA
- Haoqi Sun: Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA; McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA
- Soriul Kim: Institute of Human Genomic Study, College of Medicine, Korea University, Seoul, Republic of Korea
- Chol Shin: Institute of Human Genomic Study, College of Medicine, Korea University, Seoul, Republic of Korea; Biomedical Research Center, Korea University Ansan Hospital, Ansan, Republic of Korea
- M Brandon Westover: Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA; McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA
- Robert J Thomas: Division of Sleep Medicine, Harvard Medical School, Boston, MA, USA; Division of Pulmonary Critical Care & Sleep Medicine, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
2. Lorenzen KP, Heremans ERM, de Vos M, Mikkelsen KB. Personalization of Automatic Sleep Scoring: How Best to Adapt Models to Personal Domains in Wearable EEG. IEEE J Biomed Health Inform 2024; 28:5804-5815. PMID: 38833404. DOI: 10.1109/jbhi.2024.3409165.
Abstract
Wearable EEG enables us to capture large amounts of high-quality sleep data for diagnostic purposes. To make full use of this capacity we need high-performance automatic sleep scoring models. To this end, it has been noted that domain mismatch between recording equipment, e.g. from PSG to wearable EEG, can be considerable, but a previously observed benefit from personalizing models to individual subjects further indicates a personal domain in sleep EEG. In this work, we investigated the extent of such a personal domain in wearable EEG and reviewed supervised and unsupervised approaches to personalization as found in the literature. We investigated the personalization effect of unsupervised adversarial domain adaptation and implemented an unsupervised method based on statistics alignment. No beneficial personalization effect was observed using these unsupervised methods. We find that supervised personalization leads to a substantial performance improvement on the target subject, ranging from 15% Cohen's kappa for subjects with poor baseline performance to roughly 2% for subjects with high baseline performance. This improvement was present for models trained on both small and large datasets, indicating that even high-performance models benefit from supervised personalization. We found that this personalization can be beneficially regularized using Kullback-Leibler regularization, leading to lower variance with negligible cost to improvement. Based on the experiments, we recommend model personalization using Kullback-Leibler regularization.
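As a rough illustration of Kullback-Leibler-regularized personalization, the sketch below adds a KL term that keeps a per-subject copy of a model close to the frozen population model while fine-tuning on the subject's labels. The loss weighting and the stand-in model are assumptions, not the paper's implementation.

```python
import copy
import torch
import torch.nn.functional as F

def personalization_loss(student_logits, teacher_logits, labels, kl_weight=0.5):
    """Cross-entropy on the target subject's labels plus a KL term pulling the
    personalized model toward the frozen population model (hypothesized form)."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(teacher_logits, dim=1),
                  reduction="batchmean")
    return ce + kl_weight * kl

# usage sketch: `base_model` stands in for a pretrained sleep stager, `personal` is its copy
base_model = torch.nn.Linear(128, 5)
personal = copy.deepcopy(base_model)
features = torch.randn(16, 128)                # epoch features from the target subject
labels = torch.randint(0, 5, (16,))
with torch.no_grad():
    teacher_logits = base_model(features)
loss = personalization_loss(personal(features), teacher_logits, labels)
loss.backward()
```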
3. Pradeepkumar J, Anandakumar M, Kugathasan V, Suntharalingham D, Kappel SL, De Silva AC, Edussooriya CUS. Toward Interpretable Sleep Stage Classification Using Cross-Modal Transformers. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2893-2904. PMID: 39102323. DOI: 10.1109/tnsre.2024.3438610.
Abstract
Accurate sleep stage classification is significant for sleep health assessment. In recent years, several machine-learning based sleep staging algorithms have been developed, and in particular, deep-learning based algorithms have achieved performance on par with human annotation. Despite improved performance, a limitation of most deep-learning based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose a cross-modal transformer, which is a transformer-based method for sleep stage classification. The proposed cross-modal transformer consists of a cross-modal transformer encoder architecture along with a multi-scale one-dimensional convolutional neural network for automatic representation learning. The performance of our method is on par with state-of-the-art methods, and it eliminates the black-box behavior of deep-learning models by utilizing the interpretability of the attention modules. Furthermore, our method provides considerable reductions in the number of parameters and training time compared to state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer. A demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
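The linked repository is authoritative; purely to illustrate the cross-modal idea, the toy block below lets EEG epoch tokens attend to EOG tokens with a standard multi-head attention layer whose attention weights can be inspected for interpretability. Dimensions and naming are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """EEG tokens attend to EOG tokens (the mirrored direction could be added);
    a simplified stand-in for a cross-modal encoder block."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, eeg_tokens, eog_tokens):
        attended, weights = self.attn(query=eeg_tokens, key=eog_tokens, value=eog_tokens)
        return self.norm(eeg_tokens + attended), weights   # weights support interpretability

eeg = torch.randn(2, 30, 64)   # (batch, tokens per 30-s epoch, embedding dim)
eog = torch.randn(2, 30, 64)
fused, attn_weights = CrossModalBlock()(eeg, eog)
print(fused.shape, attn_weights.shape)   # torch.Size([2, 30, 64]) torch.Size([2, 30, 30])
```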
4. Oh S, Kweon YS, Shin GH, Lee SW. Association Between Sleep Quality and Deep Learning-Based Sleep Onset Latency Distribution Using an Electroencephalogram. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1806-1816. PMID: 38696294. DOI: 10.1109/tnsre.2024.3396169.
Abstract
To evaluate sleep quality, it is necessary to monitor overnight sleep duration. However, sleep monitoring typically requires more than 7 hours, which can be inefficient in terms of data size and analysis. Therefore, we proposed to develop a deep learning-based model that uses a 30-second sleep electroencephalogram (EEG) epoch recorded early in the sleep cycle to predict the sleep onset latency (SOL) distribution and explore associations with sleep quality (SQ). We propose a deep learning model composed of a structure that decomposes and restores the signal in epoch units and a structure that predicts the SOL distribution. We used the Sleep Heart Health Study public dataset, which includes a large number of study subjects, to estimate and evaluate the proposed model. The proposed model estimated the SOL distribution and divided it into four clusters. The advantage of the proposed model is that it shows the process of falling asleep for individual participants as a probability graph over time. Furthermore, we compared SOL against the baseline for good SQ and showed that an SOL of less than 10 minutes correlated better with good SQ. Moreover, SOL was the sleep feature most suitable for prediction from early EEG, compared with total sleep time, sleep efficiency, and actual sleep time. Our study showed the feasibility of estimating the SOL distribution using deep learning with an early EEG and showed that an SOL distribution within 10 minutes was associated with good SQ.
5. Satapathy SK, Brahma B, Panda B, Barsocchi P, Bhoi AK. Machine learning-empowered sleep staging classification using multi-modality signals. BMC Med Inform Decis Mak 2024; 24:119. PMID: 38711099. DOI: 10.1186/s12911-024-02522-2.
Abstract
The goal is to enhance an automated sleep staging system's performance by leveraging the diverse signals captured through multi-modal polysomnography (PSG) recordings. Three modalities of PSG signals, namely the electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), were considered to obtain the optimal fusions of the PSG signals, from which 63 features were extracted. These include frequency-based, time-based, statistical-based, entropy-based, and non-linear-based features. We adopted the ReliefF (ReF) feature selection algorithm to find the most suitable features for each signal and for the fused PSG signals. The twelve top features, those most correlated with the sleep stages, were selected from the extracted feature sets. The selected features were fed into an AdaBoost with Random Forest (ADB + RF) classifier to validate the chosen segments and classify the sleep stages. The experiments in this study were conducted under two testing schemes: epoch-wise testing and subject-wise testing. The research used the following publicly available datasets: ISRUC-Sleep subgroup1 (ISRUC-SG1), Sleep-EDF (S-EDF), the PhysioBank CAP sleep database (PB-CAPSDB), and S-EDF-78. This work demonstrated that the proposed fusion strategy outperforms the common individual usage of PSG signals.
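A minimal sketch of this kind of feature-selection-plus-ensemble pipeline is shown below, assuming the third-party skrebate package for ReliefF and scikit-learn 1.2 or later for the `estimator` argument; the random feature matrix stands in for the 63 extracted PSG features.

```python
import numpy as np
from skrebate import ReliefF                      # assumes the skrebate package is installed
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 63))                    # toy stand-in for 63 hand-crafted PSG features
y = rng.integers(0, 5, size=200)                  # five sleep stages

pipeline = make_pipeline(
    ReliefF(n_features_to_select=12, n_neighbors=10),      # keep the 12 top-ranked features
    AdaBoostClassifier(                                     # boosted random-forest classifier
        estimator=RandomForestClassifier(n_estimators=50),  # `estimator` needs sklearn >= 1.2
        n_estimators=10))

print(cross_val_score(pipeline, X, y, cv=3).mean())          # epoch-wise CV accuracy on toy data
```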
Affiliation(s)
- Santosh Kumar Satapathy: Department of Information and Communication Technology, Pandit Deendayal Energy University, Gandhinagar, Gujarat, 382007, India
- Biswajit Brahma: McKesson Corporation, 1 Post St, San Francisco, CA, 94104, USA
- Baidyanath Panda: LTIMindtree, 1 American Row, 3rd Floor, Hartford, CT, 06103, USA
- Paolo Barsocchi: Institute of Information Science and Technologies, National Research Council, 56124, Pisa, Italy
- Akash Kumar Bhoi: Directorate of Research, Sikkim Manipal University, Gangtok, 737102, Sikkim, India
6. Barmpas K, Panagakis Y, Zoumpourlis G, Adamos DA, Laskaris N, Zafeiriou S. A causal perspective on brainwave modeling for brain-computer interfaces. J Neural Eng 2024; 21:036001. PMID: 38621380. DOI: 10.1088/1741-2552/ad3eb5.
Abstract
Objective. Machine learning (ML) models have opened up enormous opportunities in the field of brain-computer interfaces (BCIs). Despite their great success, they usually face severe limitations when they are employed in real-life applications outside a controlled laboratory setting. Approach. Mixing causal reasoning, which identifies causal relationships between variables of interest, with brainwave modeling can change one's viewpoint on some of these major challenges, which can be found at various stages of the ML pipeline, ranging from data collection and data pre-processing to training methods and techniques. Main results. In this work, we employ causal reasoning and present a framework aiming to break down and analyze important challenges of brainwave modeling for BCIs. Significance. Furthermore, we present how general ML practices as well as brainwave-specific techniques can be utilized to solve some of these identified challenges. Finally, we discuss appropriate evaluation schemes in order to measure these techniques' performance and efficiently compare them with other methods that will be developed in the future.
Affiliation(s)
- Konstantinos Barmpas: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
- Yannis Panagakis: Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens 15784, Greece; Archimedes Research Unit, Research Center Athena, Athens 15125, Greece; Cogitat Ltd, London, United Kingdom
- Dimitrios A Adamos: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
- Nikolaos Laskaris: School of Informatics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece; Cogitat Ltd, London, United Kingdom
- Stefanos Zafeiriou: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
7. An P, Zhao J, Du B, Zhao W, Zhang T, Yuan Z. Amplitude-Time Dual-View Fused EEG Temporal Feature Learning for Automatic Sleep Staging. IEEE Trans Neural Netw Learn Syst 2024; 35:6492-6506. PMID: 36215384. DOI: 10.1109/tnnls.2022.3210384.
Abstract
Electroencephalogram (EEG) plays an important role in studying brain function and human cognitive performance, and the recognition of EEG signals is vital to develop an automatic sleep staging system. However, due to the complex nonstationary characteristics and the individual difference between subjects, how to obtain the effective signal features of the EEG for practical application is still a challenging task. In this article, we investigate the EEG feature learning problem and propose a novel temporal feature learning method based on amplitude-time dual-view fusion for automatic sleep staging. First, we explore the feature extraction ability of convolutional neural networks for the EEG signal from the perspective of interpretability and construct two new representation signals for the raw EEG from the views of amplitude and time. Then, we extract the amplitude-time signal features that reflect the transformation between different sleep stages from the obtained representation signals by using conventional 1-D CNNs. Furthermore, a hybrid dilation convolution module is used to learn the long-term temporal dependency features of EEG signals, which can overcome the shortcoming that the small-scale convolution kernel can only learn the local signal variation information. Finally, we conduct attention-based feature fusion for the learned dual-view signal features to further improve sleep staging performance. To evaluate the performance of the proposed method, we test 30-s-epoch EEG signal samples for healthy subjects and subjects with mild sleep disorders. The experimental results from the most commonly used datasets show that the proposed method has better sleep staging performance and has the potential for the development and application of an EEG-based automatic sleep staging system.
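As a generic illustration of using dilated 1-D convolutions to capture long-term temporal dependencies that small kernels miss, the sketch below stacks convolutions with increasing dilation; channel sizes and the residual connection are assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class HybridDilationBlock(nn.Module):
    """Stacked 1-D convolutions with growing dilation rates: a sketch of how
    long-range EEG dependencies can be captured without large kernels."""
    def __init__(self, channels=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=3, dilation=d, padding=d)
             for d in dilations])

    def forward(self, x):
        for conv in self.layers:
            x = torch.relu(conv(x)) + x        # residual keeps local detail
        return x

x = torch.randn(4, 32, 3000)                   # (batch, feature channels, 30-s epoch @ 100 Hz)
print(HybridDilationBlock()(x).shape)          # torch.Size([4, 32, 3000])
```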
8. Li J, Wu C, Pan J, Wang F. Few-shot EEG sleep staging based on transductive prototype optimization network. Front Neuroinform 2023; 17:1297874. PMID: 38125309. PMCID: PMC10730933. DOI: 10.3389/fninf.2023.1297874.
Abstract
Electroencephalography (EEG) is a commonly used technology for monitoring brain activities and diagnosing sleep disorders. Clinically, doctors need to manually stage sleep based on EEG signals, which is a time-consuming and laborious task. In this study, we propose a few-shot EEG sleep staging method termed the transductive prototype optimization network (TPON), which aims to improve the performance of EEG sleep staging. Compared with traditional deep learning methods, TPON uses a meta-learning algorithm, which generalizes the classifier to new classes that are not visible in the training set and have only a few examples each. We learn the prototypes of existing classes through meta-training and capture the sleep features of new classes through the "learn to learn" paradigm of meta-learning. The prototype distribution of each class is optimized and captured by using the support set and unlabeled high-confidence samples to increase the authenticity of the prototypes. Compared with traditional prototype networks, TPON can effectively address the scarcity of samples in few-shot learning and improve the matching of prototypes in the prototype network. The experimental results on the public SleepEDF-2013 dataset show that the proposed algorithm outperforms most advanced algorithms in overall performance. In addition, we experimentally demonstrate the feasibility of cross-channel recognition, which indicates that there are many similar sleep EEG features between different channels. In future research, we can further explore the common features among different channels and investigate the combination of universal features in sleep EEG. Overall, our method achieves high accuracy in sleep stage classification, demonstrating the effectiveness of this approach and its potential applications in other medical fields.
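For orientation, the snippet below shows the basic prototypical-network step that TPON builds on: class prototypes are the mean embeddings of the few labeled support epochs, and queries are softly assigned to the nearest prototype. The transductive refinement with unlabeled high-confidence samples described above would further update these prototypes; shapes and data here are purely illustrative.

```python
import torch
import torch.nn.functional as F

def prototype_classify(support_emb, support_labels, query_emb, n_classes=5):
    """Nearest-prototype classification: class prototypes are the mean embeddings
    of the (few) labeled support epochs; queries get a soft stage assignment."""
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)])
    distances = torch.cdist(query_emb, prototypes)     # Euclidean distance to each prototype
    return F.softmax(-distances, dim=1)                # soft assignment over sleep stages

support = torch.randn(25, 64)                          # 5 labeled epochs per stage (toy embeddings)
labels = torch.arange(5).repeat_interleave(5)
queries = torch.randn(10, 64)
print(prototype_classify(support, labels, queries).argmax(dim=1))
```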
Affiliation(s)
- Fei Wang: School of Software, South China Normal University, Guangzhou, China
9. Zan H, Yildiz A. Multi-task learning for arousal and sleep stage detection using fully convolutional networks. J Neural Eng 2023; 20:056034. PMID: 37769664. DOI: 10.1088/1741-2552/acfe3a.
Abstract
Objective. Sleep is a critical physiological process that plays a vital role in maintaining physical and mental health. Accurate detection of arousals and sleep stages is essential for the diagnosis of sleep disorders, as frequent and excessive occurrences of arousals disrupt sleep stage patterns and lead to poor sleep quality, negatively impacting physical and mental health. Polysomnography is a traditional method for arousal and sleep stage detection that is time-consuming and prone to high variability among experts. Approach. In this paper, we propose a novel multi-task learning approach for arousal and sleep stage detection using fully convolutional neural networks. Our model, FullSleepNet, accepts a full-night single-channel EEG signal as input and produces segmentation masks for arousal and sleep stage labels. FullSleepNet comprises four modules: a convolutional module to extract local features, a recurrent module to capture long-range dependencies, an attention mechanism to focus on relevant parts of the input, and a segmentation module to output final predictions. Main results. By unifying the two interrelated tasks as segmentation problems and employing a multi-task learning approach, FullSleepNet achieves state-of-the-art performance for arousal detection with an area under the precision-recall curve of 0.70 on the Sleep Heart Health Study and Multi-Ethnic Study of Atherosclerosis datasets. For sleep stage classification, FullSleepNet obtains comparable performance on both datasets, achieving an accuracy of 0.88 and an F1-score of 0.80 on the former and an accuracy of 0.83 and an F1-score of 0.76 on the latter. Significance. Our results demonstrate that FullSleepNet offers improved practicality, efficiency, and accuracy for the detection of arousal and classification of sleep stages using raw EEG signals as input.
Affiliation(s)
- Hasan Zan: Vocational School, Mardin Artuklu University, Mardin, Turkey
- Abdulnasır Yildiz: Department of Electrical and Electronics Engineering, Dicle University, Diyarbakir, Turkey
10. Lee M, Kwak HG, Kim HJ, Won DO, Lee SW. SeriesSleepNet: an EEG time series model with partial data augmentation for automatic sleep stage scoring. Front Physiol 2023; 14:1188678. PMID: 37700762. PMCID: PMC10494443. DOI: 10.3389/fphys.2023.1188678.
Abstract
Introduction: We propose an automatic sleep stage scoring model, referred to as SeriesSleepNet, based on a convolutional neural network (CNN) and bidirectional long short-term memory (bi-LSTM) with partial data augmentation. We used single-channel raw electroencephalography signals for automatic sleep stage scoring. Methods: Our framework focuses on time series information, so we applied partial data augmentation to learn the connected time information in small series. Specifically, the CNN module learns the time information of one epoch (intra-epoch) whereas the bi-LSTM learns the sequential information between adjacent epochs (inter-epoch). Note that the input of the bi-LSTM is the augmented CNN output. Moreover, the proposed loss function was used to fine-tune the model by providing additional weights. To validate the proposed framework, we conducted two experiments using the Sleep-EDF and SHHS datasets. Results and Discussion: The results achieved overall accuracies of 0.87 and 0.84, overall F1-scores of 0.80 and 0.78, and kappa values of 0.81 and 0.78 for five-class classification on the two datasets, respectively. We showed that SeriesSleepNet was superior to baselines based on each component in the proposed framework. Our architecture also outperformed state-of-the-art methods in overall F1-score, accuracy, and kappa value. Our framework could provide information on sleep disorders or quality of sleep by automatically classifying sleep stages with high performance.
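The general CNN-plus-bi-LSTM pattern described here (intra-epoch CNN features, inter-epoch recurrent context) can be sketched as below; layer sizes are arbitrary, and the partial data augmentation and custom loss of SeriesSleepNet are not reproduced.

```python
import torch
import torch.nn as nn

class CnnBiLstmStager(nn.Module):
    """Sketch of the CNN + bi-LSTM pattern: a CNN embeds each 30-s epoch
    (intra-epoch), a bi-LSTM models the sequence of epochs (inter-epoch)."""
    def __init__(self, n_classes=5, emb=64):
        super().__init__()
        self.epoch_cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(), nn.Linear(16 * 4, emb))
        self.seq_lstm = nn.LSTM(emb, emb, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * emb, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, samples per epoch)
        b, s, t = x.shape
        feats = self.epoch_cnn(x.reshape(b * s, 1, t)).reshape(b, s, -1)
        out, _ = self.seq_lstm(feats)
        return self.head(out)                  # one stage prediction per epoch in the sequence

x = torch.randn(2, 10, 3000)                   # 2 recordings, 10 consecutive epochs each
print(CnnBiLstmStager()(x).shape)              # torch.Size([2, 10, 5])
```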
Affiliation(s)
- Minji Lee: Department of Biomedical Software Engineering, The Catholic University of Korea, Bucheon, Republic of Korea
- Heon-Gyu Kwak: Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Hyeong-Jin Kim: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dong-Ok Won: Department of Artificial Intelligence Convergence, Hallym University, Chuncheon, Republic of Korea
- Seong-Whan Lee: Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
11. Liu G, Wei G, Sun S, Mao D, Zhang J, Zhao D, Tian X, Wang X, Chen N. Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging. Front Neurosci 2023; 17:1218072. PMID: 37575302. PMCID: PMC10416229. DOI: 10.3389/fnins.2023.1218072.
Abstract
A real-time sleep staging algorithm that can perform inference on mobile devices without burden is a prerequisite for closed-loop sleep modulation. However, current deep learning sleep staging models have poor real-time efficiency and redundant parameters. We propose a lightweight and high-performance sleep staging model named Micro SleepNet, which takes a 30-s electroencephalography (EEG) epoch as input, without relying on contextual signals. The model features a one-dimensional group convolution with a kernel size of 1 × 3 and an Efficient Channel and Spatial Attention (ECSA) module for feature extraction and adaptive recalibration. Moreover, the model efficiently performs feature fusion using a dilated convolution module and replaces the conventional fully connected layer with Global Average Pooling (GAP). These design choices significantly reduce the total number of model parameters to 48,226, requiring only approximately 48.95 million floating-point operations (MFLOPs) of computation. The proposed model was evaluated with subject-independent cross-validation on three publicly available datasets, achieving an overall accuracy of up to 83.3% and a Cohen's kappa of 0.77. Additionally, we introduce Class Activation Mapping (CAM) to visualize the model's attention to EEG waveforms, which demonstrates the model's ability to accurately capture the feature waveforms of EEG at different sleep stages. This provides a strong interpretability foundation for practical applications. Furthermore, the Micro SleepNet model occupies approximately 100 KB of memory on an Android smartphone and takes only 2.8 ms to infer one EEG epoch, meeting the real-time requirements of sleep staging tasks on mobile devices. Consequently, our proposed model has the potential to serve as a foundation for accurate closed-loop sleep modulation.
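As a toy illustration of the two lightweight design choices highlighted above (grouped 1-D convolutions and global average pooling in place of a fully connected layer), with made-up layer sizes unrelated to the actual Micro SleepNet:

```python
import torch
import torch.nn as nn

class TinyStager(nn.Module):
    """Toy lightweight stager: grouped (depthwise) convolutions and GAP keep the
    parameter count small; this is not the published architecture."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1, groups=16),  # grouped (depthwise) conv
            nn.ReLU(),
            nn.Conv1d(16, n_classes, kernel_size=1),
            nn.AdaptiveAvgPool1d(1),                                  # GAP replaces the FC layer
            nn.Flatten())

    def forward(self, x):
        return self.net(x)

model = TinyStager()
print(sum(p.numel() for p in model.parameters()))   # parameter count stays tiny
print(model(torch.randn(1, 1, 3000)).shape)         # torch.Size([1, 5])
```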
Affiliation(s)
- Guisong Liu: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
- Guoliang Wei: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
- Shuqing Sun: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
- Dandan Mao: Department of Sleep and Psychology, Institute of Surgery Research, Daping Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Jiansong Zhang: School of Medicine, Huaqiao University, Quanzhou, Fujian, China
- Dechun Zhao: College of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, China
- Xuelong Tian: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
- Xing Wang: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
- Nanxi Chen: Department of Biomedical Engineering, Bioengineering College, Chongqing University, Chongqing, China
12. Wenjian W, Qian X, Jun X, Zhikun H. DynamicSleepNet: a multi-exit neural network with adaptive inference time for sleep stage classification. Front Physiol 2023; 14:1171467. PMID: 37250117. PMCID: PMC10213983. DOI: 10.3389/fphys.2023.1171467.
Abstract
Sleep is an essential human physiological behavior, and the quality of sleep directly affects a person's physical and mental state. In clinical medicine, sleep staging is an important basis for doctors to diagnose and treat sleep disorders. The traditional method of classifying sleep stages requires sleep experts to classify them manually, and the whole process is time-consuming and laborious. In recent years, with the help of deep learning, automatic sleep stage classification has made great progress, especially for networks using multi-modal electrophysiological signals, which have greatly improved in terms of accuracy. However, we found that existing multimodal networks perform a large number of redundant calculations when using multiple electrophysiological signals, and that the networks become heavier due to the use of multiple signals, making them difficult to deploy on small devices. To solve these two problems, this paper proposes DynamicSleepNet, a network that can maximize the use of multiple electrophysiological signals and can dynamically trade off between accuracy and efficiency. DynamicSleepNet consists of three effective feature extraction modules (EFEMs) and three classifier modules; each EFEM is connected to a classifier. Each EFEM is able to extract signal features while making the effective features more prominent and suppressing the invalid ones. The samples processed by an EFEM are given to the corresponding classifier for classification, and if the classifier considers the uncertainty of the sample to be below the threshold we set, the sample can be output early without going through the whole network. We validated our model on four datasets. The results show that the highest accuracy of our model outperforms all baselines. With accuracy close to that of the baselines, our model is several to several tens of times faster, with a comparable or smaller number of parameters. The implementation code is available at: https://github.com/Quinella7291/A-Multi-exit-Neural-Network-with-Adaptive-Inference-Time-for-Sleep-Stage-Classification/.
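The early-exit mechanism can be sketched generically as below: each backbone block feeds its own classifier, and inference stops as soon as a prediction clears a confidence threshold. Block structure, threshold, and the confidence measure are assumptions; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class MultiExitStager(nn.Module):
    """Toy multi-exit network: each backbone block has its own classifier, and
    inference stops early once the prediction is confident enough."""
    def __init__(self, dim=64, n_classes=5, n_exits=3):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                                     for _ in range(n_exits)])
        self.exits = nn.ModuleList([nn.Linear(dim, n_classes) for _ in range(n_exits)])

    @torch.no_grad()
    def predict(self, x, confidence_threshold=0.9):
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = torch.softmax(exit_head(x), dim=-1)
            if probs.max().item() >= confidence_threshold:   # confident enough: exit early
                return probs, i
        return probs, len(self.blocks) - 1                   # fell through to the last exit

probs, exit_used = MultiExitStager().predict(torch.randn(1, 64))   # single 30-s epoch feature
print(exit_used, probs.argmax(dim=-1))
```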
Affiliation(s)
- Wang Wenjian: School of Information Science, Yunnan University, Kunming, China
13. Fiorillo L, Monachino G, van der Meer J, Pesce M, Warncke JD, Schmidt MH, Bassetti CLA, Tzovara A, Favaro P, Faraci FD. U-Sleep's resilience to AASM guidelines. NPJ Digit Med 2023; 6:33. PMID: 36878957. PMCID: PMC9988983. DOI: 10.1038/s41746-023-00784-0.
Abstract
AASM guidelines are the result of decades of efforts aiming at standardizing the sleep scoring procedure, with the final goal of sharing a worldwide common methodology. The guidelines cover several aspects, from the technical/digital specifications, e.g., recommended EEG derivations, to detailed sleep scoring rules according to age. Automated sleep scoring systems have always largely exploited the standards as fundamental guidelines. In this context, deep learning has demonstrated better performance compared to classical machine learning. Our present work shows that a deep learning-based sleep scoring algorithm may not need to fully exploit the clinical knowledge or to strictly adhere to the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, can be strong enough to solve the scoring task even using clinically non-recommended or non-conventional derivations, and with no need to exploit information about the chronological age of the subjects. We finally strengthen a well-known finding that using data from multiple data centers always results in a better performing model compared with training on a single cohort. Indeed, we show that this latter statement is still valid even when increasing the size and the heterogeneity of the single data cohort. In all our experiments we used 28,528 polysomnography studies from 13 different clinical studies.
Affiliation(s)
- Luigi Fiorillo: Institute of Informatics, University of Bern, Bern, Switzerland; Institute of Digital Technologies for Personalized Healthcare ∣ MeDiTech, Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Giuliana Monachino: Institute of Informatics, University of Bern, Bern, Switzerland; Institute of Digital Technologies for Personalized Healthcare ∣ MeDiTech, Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Julia van der Meer: Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Marco Pesce: Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jan D Warncke: Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Markus H Schmidt: Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Claudio L A Bassetti: Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Athina Tzovara: Institute of Informatics, University of Bern, Bern, Switzerland; Sleep Wake Epilepsy Center ∣ NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Paolo Favaro: Institute of Informatics, University of Bern, Bern, Switzerland
- Francesca D Faraci: Institute of Digital Technologies for Personalized Healthcare ∣ MeDiTech, Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
14. Do not sleep on traditional machine learning. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104429.
15. Efe E, Ozsen S. CoSleepNet: Automated sleep staging using a hybrid CNN-LSTM network on imbalanced EEG-EOG datasets. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104299.
16. Zan H, Yildiz A. Local Pattern Transformation-Based convolutional neural network for sleep stage scoring. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104275.
17. Nazih W, Shahin M, Eldesouki MI, Ahmed B. Influence of Channel Selection and Subject's Age on the Performance of the Single Channel EEG-Based Automatic Sleep Staging Algorithms. Sensors (Basel) 2023; 23:899. PMID: 36679711. PMCID: PMC9866121. DOI: 10.3390/s23020899.
Abstract
The electroencephalogram (EEG) signal is a key parameter used to identify the different sleep stages present in an overnight sleep recording. Sleep staging is crucial in the diagnosis of several sleep disorders; however, the manual annotation of the EEG signal is a costly and time-consuming process. Automatic sleep staging algorithms offer a practical and cost-effective alternative to manual sleep staging. However, due to the limited availability of EEG sleep datasets, the reliability of existing sleep staging algorithms is questionable. Furthermore, most reported experimental results have been obtained using adult EEG signals; the effectiveness of these algorithms using pediatric EEGs is unknown. In this paper, we conduct an intensive study of two state-of-the-art single-channel EEG-based sleep staging algorithms, namely DeepSleepNet and AttnSleep, using a recently released large-scale sleep dataset collected from 3984 patients, most of whom are children. The paper studies how the performance of these sleep staging algorithms varies when applied on different EEG channels and across different age groups. Furthermore, all results were analyzed within individual sleep stages to understand how each stage is affected by the choice of EEG channel and the participants' age. The study concluded that the selection of the channel is crucial for the accuracy of the single-channel EEG-based automatic sleep staging methods. For instance, channels O1-M2 and O2-M1 performed consistently worse than other channels for both algorithms and through all age groups. The study also revealed the challenges in the automatic sleep staging of newborns and infants (1-52 weeks).
Affiliation(s)
- Waleed Nazih: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Mostafa Shahin: School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
- Mohamed I. Eldesouki: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Beena Ahmed: School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
18. Sholeyan AE, Rahatabad FN, Setarehdan SK. Designing an Automatic Sleep Staging System Using Deep Convolutional Neural Network Fed by Nonlinear Dynamic Transformation. J Med Biol Eng 2022. DOI: 10.1007/s40846-022-00771-y.
19. Xie Z, Yang Y, Zhang Y, Wang J, Du S. Deep learning on multi-view sequential data: a survey. Artif Intell Rev 2022; 56:6661-6704. PMID: 36466765. PMCID: PMC9707228. DOI: 10.1007/s10462-022-10332-z.
Abstract
With the progress of human daily interaction activities and the development of industrial society, a large amount of media data and sensor data have become accessible. Humans collect these multi-source data in chronological order, called multi-view sequential data (MvSD). MvSD has numerous potential application domains, including intelligent transportation, climate science, health care, public safety and multimedia. However, as the volume and scale of MvSD increase, traditional machine learning methods struggle to cope with such large-scale data, and it is no longer appropriate to use hand-crafted features to represent these complex data. In addition, there is no general framework for mining multi-view relationships and integrating multi-view information. In this paper, we first introduce four common data types that constitute MvSD, including point data, sequence data, graph data, and raster data. Then, we summarize the technical challenges of MvSD. Subsequently, we review the recent progress in deep learning technology applied to MvSD. Meanwhile, we discuss how networks represent and learn features of MvSD. Finally, we summarize the applications of MvSD in different domains and give potential research directions.
Affiliation(s)
- Zhuyang Xie: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory, Southwest Jiaotong University, Chengdu 611756, China
- Yan Yang: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory, Southwest Jiaotong University, Chengdu 611756, China
- Yiling Zhang: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory, Southwest Jiaotong University, Chengdu 611756, China
- Jie Wang: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory, Southwest Jiaotong University, Chengdu 611756, China
- Shengdong Du: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory, Southwest Jiaotong University, Chengdu 611756, China
20. Rommel C, Paillard J, Moreau T, Gramfort A. Data augmentation for learning predictive models on EEG: a systematic comparison. J Neural Eng 2022; 19. PMID: 36368035. DOI: 10.1088/1741-2552/aca220.
Abstract
Objective. The use of deep learning for electroencephalography (EEG) classification tasks has been rapidly growing in the last years, yet its application has been limited by the relatively small size of EEG datasets. Data augmentation, which consists in artificially increasing the size of the dataset during training, can be employed to alleviate this problem. While a few augmentation transformations for EEG data have been proposed in the literature, their positive impact on performance is often evaluated on a single dataset and compared to one or two competing augmentation methods. This work proposes to better validate the existing data augmentation approaches through a unified and exhaustive analysis. Approach. We compare quantitatively 13 different augmentations with two different predictive tasks, datasets and models, using three different types of experiments. Main results. We demonstrate that employing the adequate data augmentations can bring up to 45% accuracy improvements in low data regimes compared to the same model trained without any augmentation. Our experiments also show that there is no single best augmentation strategy, as the good augmentations differ on each task. Significance. Our results highlight the best data augmentations to consider for sleep stage classification and motor imagery brain-computer interfaces. More broadly, it demonstrates that EEG classification tasks benefit from adequate data augmentation.
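For a concrete sense of what such augmentations look like, the snippet below applies two common EEG transforms (additive Gaussian noise and time masking) to a toy epoch; these are generic examples and not necessarily among the 13 augmentations compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_epoch(x, sigma=0.1, mask_fraction=0.1):
    """Two simple EEG augmentations: additive Gaussian noise and time masking
    (zeroing a random contiguous chunk of the epoch)."""
    x = x + rng.normal(0.0, sigma * x.std(), size=x.shape)   # Gaussian noise
    n = x.shape[-1]
    width = int(mask_fraction * n)
    start = rng.integers(0, n - width)
    x[..., start:start + width] = 0.0                         # time mask
    return x

epoch = rng.normal(size=(2, 3000))        # 2 channels, 30 s at 100 Hz (toy signal)
print(augment_epoch(epoch.copy()).shape)  # (2, 3000)
```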
Affiliation(s)
- Cédric Rommel: Université Paris-Saclay, Inria, CEA, Palaiseau 91120, France
- Joseph Paillard: Université Paris-Saclay, Inria, CEA, Palaiseau 91120, France
- Thomas Moreau: Université Paris-Saclay, Inria, CEA, Palaiseau 91120, France
21. Kim H, Lee SM, Choi S. Automatic sleep stages classification using multi-level fusion. Biomed Eng Lett 2022; 12:413-420. PMID: 36238370. PMCID: PMC9550904. DOI: 10.1007/s13534-022-00244-w.
Abstract
Sleep efficiency is a factor that can determine whether a person leads a healthy life. Sleep efficiency can be calculated by analyzing the results of sleep stage classification. There have been many studies on classifying sleep stages automatically using multiple signals to improve the accuracy of sleep stage classification. Fusion methods are used to process multi-signal data, and include data-level fusion, feature-level fusion, and decision-level fusion. We propose a multi-level fusion method to increase the accuracy of sleep stage classification when using multi-signal data consisting of electroencephalography and electromyography signals. First, we used feature-level fusion to fuse the features extracted by a convolutional neural network from the multi-signal data. Then, after obtaining each classified result using the fused feature data, the sleep stage was derived using a decision-level fusion method that fused the classified results. We used the public Sleep-EDF dataset to measure performance; we confirmed that the proposed multi-level fusion method yielded a higher accuracy of 87.2% compared to single-level fusion methods and other existing methods. The proposed multi-level fusion method showed the most improved performance in classifying the N1 stage, where existing methods had the lowest performance.
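A minimal sketch of combining the two fusion levels described above, with hypothetical branch architectures: EEG and EMG features are concatenated (feature-level) and the branch-wise and fused predictions are averaged (decision-level).

```python
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Sketch of multi-level fusion: feature-level concatenation of the EEG and
    EMG branches plus decision-level averaging of the three predictions."""
    def __init__(self, dim=32, n_classes=5):
        super().__init__()
        self.eeg_net = nn.Sequential(nn.Conv1d(1, dim, 25, stride=5), nn.ReLU(),
                                     nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.emg_net = nn.Sequential(nn.Conv1d(1, dim, 25, stride=5), nn.ReLU(),
                                     nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.eeg_head = nn.Linear(dim, n_classes)
        self.emg_head = nn.Linear(dim, n_classes)
        self.fused_head = nn.Linear(2 * dim, n_classes)

    def forward(self, eeg, emg):
        f_eeg, f_emg = self.eeg_net(eeg), self.emg_net(emg)
        fused = torch.cat([f_eeg, f_emg], dim=1)                  # feature-level fusion
        logits = (self.eeg_head(f_eeg) + self.emg_head(f_emg)
                  + self.fused_head(fused)) / 3                   # decision-level fusion (averaged)
        return logits

print(MultiLevelFusion()(torch.randn(4, 1, 3000), torch.randn(4, 1, 3000)).shape)
```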
Affiliation(s)
- Hyungjik Kim: Department of Secured Smart Electric Vehicle, Kookmin University, 02707 Seoul, Korea
- Seung Min Lee: Department of Electrical Engineering, Kookmin University, 02707 Seoul, Korea
- Sunwoong Choi: Department of Electrical Engineering, Kookmin University, 02707 Seoul, Korea
22. van Gorp H, Huijben IAM, Fonseca P, van Sloun RJG, Overeem S, van Gilst MM. Certainty about uncertainty in sleep staging: a theoretical framework. Sleep 2022; 45:6604464. DOI: 10.1093/sleep/zsac134.
Abstract
Sleep stage classification is an important tool for the diagnosis of sleep disorders. Because sleep staging has such a high impact on clinical outcome, it is important that it is done reliably. However, it is known that uncertainty exists in both expert scorers and automated models. On average, the agreement between human scorers is only 82.6%. In this study, we provide a theoretical framework to facilitate discussion and further analyses of uncertainty in sleep staging. To this end, we introduce two variants of uncertainty, known from statistics and the machine learning community: aleatoric and epistemic uncertainty. We discuss what these types of uncertainties are, why the distinction is useful, where they arise from in sleep staging, and provide recommendations on how this framework can improve sleep staging in the future.
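The framework itself is conceptual, but the two uncertainty types can be illustrated numerically with the standard entropy decomposition used for ensembles or MC dropout, where total predictive entropy splits into an aleatoric term (expected per-member entropy) and an epistemic term (member disagreement). This is an illustrative addition, not part of the cited paper.

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: (n_models, n_classes) stage probabilities for one epoch, e.g.
    from an ensemble or MC dropout. Total entropy splits into an aleatoric part
    (mean member entropy) and an epistemic part (disagreement between members)."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(member_probs).mean()
    epistemic = total - aleatoric          # mutual information between prediction and model
    return total, aleatoric, epistemic

probs = np.array([[0.7, 0.1, 0.1, 0.05, 0.05],   # three hypothetical model samples
                  [0.2, 0.5, 0.1, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.05, 0.05]])
print(decompose_uncertainty(probs))
```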
Affiliation(s)
- Hans van Gorp: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Personal Health, Philips Research, Eindhoven, the Netherlands
- Iris A M Huijben: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Onera Health, Eindhoven, the Netherlands
- Pedro Fonseca: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Personal Health, Philips Research, Eindhoven, the Netherlands
- Ruud J G van Sloun: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Personal Health, Philips Research, Eindhoven, the Netherlands
- Sebastiaan Overeem: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Sleep Medicine Centre, Kempenhaeghe Foundation, Eindhoven, the Netherlands
- Merel M van Gilst: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Sleep Medicine Centre, Kempenhaeghe Foundation, Eindhoven, the Netherlands
23. Zou G, Liu J, Zou Q, Gao JH. A-PASS: An automated pipeline to analyze simultaneously acquired EEG-fMRI data for studying brain activities during sleep. J Neural Eng 2022; 19. PMID: 35878599. DOI: 10.1088/1741-2552/ac83f2.
Abstract
OBJECTIVE Concurrent electroencephalography and functional magnetic resonance imaging (EEG-fMRI) signals can be used to uncover the nature of brain activities during sleep. However, analyzing simultaneously acquired EEG-fMRI data is extremely time consuming and experience dependent. Thus, we developed a pipeline, which we named A-PASS, to automatically analyze simultaneously acquired EEG-fMRI data for studying brain activities during sleep. APPROACH A deep learning model was trained on a sleep EEG-fMRI dataset from 45 subjects and used to perform sleep stage scoring. Various fMRI indices can be calculated with A-PASS to depict the neurophysiological characteristics across different sleep stages. We tested the performance of A-PASS on an independent sleep EEG-fMRI dataset from 28 subjects. Statistical maps regarding the main effect of sleep stages and differences between each pair of stages of fMRI indices were generated and compared using both A-PASS and manual processing methods. MAIN RESULTS The deep learning model implemented in A-PASS achieved both an accuracy and F1-score higher than 70% for sleep stage classification on EEG data acquired during fMRI scanning. The statistical maps generated from A-PASS largely resembled those produced from manually scored stages plus a combination of multiple software programs. SIGNIFICANCE A-PASS allowed efficient EEG-fMRI data processing without manual operation and could serve as a reliable and powerful tool for simultaneous EEG-fMRI studies on sleep.
Affiliation(s)
- Guangyuan Zou: Peking University, 5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Jiayi Liu: Peking University, 5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Qihong Zou: Peking University, 5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Jia-Hong Gao: Peking University, 5 Yiheyuan Road, Haidian District, Beijing 100871, China
24. Zhao C, Li J, Guo Y. SleepContextNet: A temporal context network for automatic sleep staging based single-channel EEG. Comput Methods Programs Biomed 2022; 220:106806. PMID: 35461126. DOI: 10.1016/j.cmpb.2022.106806.
Abstract
BACKGROUND AND OBJECTIVE Single-channel EEG is the most popular choice of sensing modality in sleep staging studies, because it widely conforms to the sleep staging guidelines. Current deep learning methods using single-channel EEG signals for sleep staging mainly extract features from the surrounding epochs to obtain short-term temporal context information for each EEG epoch, and ignore the influence of long-term temporal context information on sleep staging. However, the long-term context information includes sleep stage transition rules within a sleep cycle, which can further improve the performance of sleep staging. The aim of this research is to develop a temporal context network to capture the long-term context between EEG sleep stages. METHODS In this paper, we design a sleep staging network named SleepContextNet for sleep stage sequences. SleepContextNet can extract and utilize the long-term temporal context between consecutive EEG epochs and combine it with the short-term context. We utilize Convolutional Neural Network (CNN) layers to learn representative features from each sleep stage, and the learned sequence of representation features is fed into a Recurrent Neural Network (RNN) layer that learns long-term and short-term context information among sleep stages in chronological order. In addition, we design a data augmentation algorithm for EEG that retains the long-term context information without changing the number of samples. RESULTS We evaluate the performance of our proposed network using four public datasets: the 2013 version of Sleep-EDF (SEDF), the 2018 version of Sleep-EDF Expanded (SEDFX), the Sleep Heart Health Study (SHHS) and the CAP Sleep Database. The experimental results demonstrate that SleepContextNet outperforms state-of-the-art techniques in terms of different evaluation metrics by capturing long-term and short-term temporal context information. On average, accuracies of 84.8% on SEDF, 82.7% on SEDFX, 86.4% on SHHS and 78.8% on CAP are obtained under subject-independent cross-validation. CONCLUSIONS The network extracts the long-term and short-term temporal context information of sleep stages from the sequence features, which utilizes the temporal dependencies among the EEG epochs effectively and improves the accuracy of sleep staging. The sleep staging method based on forward temporal context information is suitable for real-time home sleep monitoring systems.
Affiliation(s)
- Caihong Zhao: School of Electronic Engineering, Heilongjiang University, Harbin, 150080, China; School of Computer Science and Technology, Heilongjiang University, Harbin, 150080, China
- Jinbao Li: Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
- Yahong Guo: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
25. Li C, Qi Y, Ding X, Zhao J, Sang T, Lee M. A Deep Learning Method Approach for Sleep Stage Classification with EEG Spectrogram. Int J Environ Res Public Health 2022; 19:6322. PMID: 35627856. PMCID: PMC9141573. DOI: 10.3390/ijerph19106322.
Abstract
The classification of sleep stages is an important process. However, this process is time-consuming, subjective, and error-prone. Many automated classification methods use electroencephalogram (EEG) signals for classification. These methods do not classify well enough and perform poorly on the N1 stage due to unbalanced data. In this paper, we propose a sleep stage classification method using EEG spectrograms. We have designed a deep learning model called EEGSNet based on multi-layer convolutional neural networks (CNNs) to extract time and frequency features from the EEG spectrogram, and two-layer bi-directional long short-term memory networks (Bi-LSTMs) to learn the transition rules between features from adjacent epochs and to perform the classification of sleep stages. In addition, to improve the generalization ability of the model, we have used Gaussian error linear units (GELUs) as the activation function of the CNN. The proposed method was evaluated on four public databases, Sleep-EDFX-8, Sleep-EDFX-20, Sleep-EDFX-78, and SHHS. The accuracy of the method is 94.17%, 86.82%, 83.02% and 85.12% for the four datasets, respectively, the MF1 is 87.78%, 81.57%, 77.26% and 78.54%, respectively, and the kappa is 0.91, 0.82, 0.77 and 0.79, respectively. In addition, our proposed method achieved better classification results on N1, with F1-scores of 70.16%, 52.41%, 50.03% and 47.26% for the four datasets.
Collapse
Affiliation(s)
- Chengfan Li
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; (C.L.); (Y.Q.); (J.Z.); (T.S.)
| | - Yueyu Qi
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; (C.L.); (Y.Q.); (J.Z.); (T.S.)
| | - Xuehai Ding
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; (C.L.); (Y.Q.); (J.Z.); (T.S.)
| | - Junjuan Zhao
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; (C.L.); (Y.Q.); (J.Z.); (T.S.)
| | - Tian Sang
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; (C.L.); (Y.Q.); (J.Z.); (T.S.)
| | - Matthew Lee
- 12th Grade, The Bishop’s School, La Jolla, CA 92037, USA;
| |
Collapse
|
26
|
Zhu H, Wu Y, Shen N, Fan J, Tao L, Fu C, Yu H, Wan F, Pun SH, Chen C, Chen W. The Masking Impact of Intra-artifacts in EEG on Deep Learning-based Sleep Staging Systems: A Comparative Study. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1452-1463. [PMID: 35536800 DOI: 10.1109/tnsre.2022.3173994] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Elimination of intra-artifacts in EEG has been overlooked in most existing sleep staging systems, especially deep learning-based approaches. This paper investigates whether intra-artifacts in EEG signals, originating from eye movement, chin muscle activity, heartbeat, and similar sources, have a positive or negative masking effect on deep learning-based sleep staging systems. We systematically analyzed several traditional pre-processing methods, including fast Independent Component Analysis (FastICA), Information Maximization (Infomax), and Second-Order Blind Source Separation (SOBI). On top of these methods, a SOBI-WT method based on the joint use of SOBI and the Wavelet Transform (WT) is proposed; it offers an effective solution for suppressing artifact components while retaining the informative residual signal. To provide a comprehensive comparison, these pre-processing methods were applied to eliminate the intra-artifacts, and the processed signals were fed to two ready-to-use deep learning models, the two-step hierarchical neural network (THNN) and SimpleSleepNet, for automatic sleep staging. The evaluation was performed on two widely used public datasets, the Montreal Archive of Sleep Studies (MASS) and Sleep-EDF Expanded, and on a clinical dataset collected at Huashan Hospital of Fudan University, Shanghai, China (HSFU). Compared with raw EEG, the proposed SOBI-WT method increased accuracy from 79.0% to 81.3% on MASS, from 83.3% to 85.7% on Sleep-EDF Expanded, and from 75.5% to 77.1% on HSFU. The experimental results demonstrate that intra-artifacts have a negative masking impact on deep learning-based sleep staging systems and that the proposed SOBI-WT method is the most effective of the compared artifact elimination methods at diminishing this impact.
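The general "separate sources, wavelet-denoise artifact components, reconstruct" idea behind SOBI-WT can be sketched generically. Since SOBI is not available in standard Python libraries, the sketch below substitutes FastICA (one of the compared baselines) for the blind source separation step; the wavelet thresholds and component selection are illustrative assumptions, not the paper's algorithm.

```python
# Generic BSS + wavelet-thresholding sketch (FastICA stands in for SOBI).
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_soft_threshold(sig, wavelet="db4", level=4):
    """Soft-threshold detail coefficients (universal threshold) to suppress artifacts."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def bss_wt_clean(eeg, artifact_idx):
    """eeg: (n_channels, n_samples). Denoise selected source components, then mix back."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)                       # (n_samples, n_components)
    for k in artifact_idx:                                   # only the suspect components
        sources[:, k] = wavelet_soft_threshold(sources[:, k])
    return ica.inverse_transform(sources).T                  # back to channel space

# Example: 4 synthetic channels, 30 s at 100 Hz; pretend component 0 carries EOG artifact.
cleaned = bss_wt_clean(np.random.randn(4, 3000), artifact_idx=[0])
print(cleaned.shape)                                         # (4, 3000)
```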
Collapse
|
27
|
Heremans ERM, Phan H, Borzée P, Buyse B, Testelmans D, De Vos M. From unsupervised to semi-supervised adversarial domain adaptation in EEG-based sleep staging. J Neural Eng 2022; 19. [PMID: 35508121 DOI: 10.1088/1741-2552/ac6ca8] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 05/04/2022] [Indexed: 10/18/2022]
Abstract
OBJECTIVE The recent breakthrough of wearable sleep monitoring devices has resulted in large amounts of sleep data. However, because only limited labels are available, interpreting these data requires automated sleep stage classification methods that need little labeled training data. Transfer learning and domain adaptation offer possible solutions by enabling models to learn on a source dataset and adapt to a target dataset. APPROACH In this paper, we investigate adversarial domain adaptation applied to real use cases with wearable sleep datasets acquired from diseased patient populations. Different practical aspects of the adversarial domain adaptation framework are examined, including the added value of (pseudo-)labels from the target dataset and the influence of domain mismatch between the source and target data. The method is also implemented to personalize models to specific patients. MAIN RESULTS The results show that adversarial domain adaptation is effective for sleep staging on wearable data. Compared to a model applied to a target dataset without any adaptation, the domain adaptation method in its simplest form achieves relative gains of 7%-27% in accuracy. Performance on the target domain is further boosted by adding pseudo-labels and real target-domain labels when available, and by choosing an appropriate source dataset. Furthermore, unsupervised adversarial domain adaptation can also personalize a model, improving performance by 1%-2% compared to a non-personalized model. SIGNIFICANCE In conclusion, adversarial domain adaptation provides a flexible framework for semi-supervised and unsupervised transfer learning, which is particularly useful in sleep staging and other wearable EEG applications.
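Adversarial domain adaptation is commonly implemented with a gradient-reversal layer and a domain discriminator (the DANN recipe). The sketch below shows that general recipe only; the paper's exact networks, losses, and training schedule may differ, and the layer sizes here are assumptions.

```python
# Minimal DANN-style sketch: a gradient-reversal layer makes the shared features
# fool a source-vs-target domain discriminator.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None             # reverse gradients flowing to features

class DANNSleepStager(nn.Module):
    def __init__(self, in_dim=3000, feat=128, n_stages=5):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat), nn.ReLU())
        self.stage_head = nn.Linear(feat, n_stages)          # trained on labeled source
        self.domain_head = nn.Linear(feat, 2)                # source vs. target

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.stage_head(f), self.domain_head(GradReverse.apply(f, lam))

model = DANNSleepStager()
src, tgt = torch.randn(8, 3000), torch.randn(8, 3000)        # labeled source, unlabeled target
stage_logits, dom_src = model(src)
_, dom_tgt = model(tgt)
# Total loss = staging loss on source + domain loss on both domains; the gradient
# reversal pushes the shared features toward domain invariance.
loss = (nn.functional.cross_entropy(stage_logits, torch.randint(0, 5, (8,)))
        + nn.functional.cross_entropy(dom_src, torch.zeros(8, dtype=torch.long))
        + nn.functional.cross_entropy(dom_tgt, torch.ones(8, dtype=torch.long)))
loss.backward()
```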
Collapse
Affiliation(s)
- Elisabeth Roxane Marie Heremans
- Department of Electrical Engineering, KU Leuven Science Engineering and Technology Group, Kasteelpark Arenberg 10, Leuven, 3001, BELGIUM
| | - Huy Phan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Rd, Bethnal Green, London, E1 4NS, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
| | - Pascal Borzée
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, 3000, BELGIUM
| | - Bertien Buyse
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, Flanders, 3000, BELGIUM
| | - Dries Testelmans
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, 3000, BELGIUM
| | - Maarten De Vos
- Department of Electrical Engineering, KU Leuven Science Engineering and Technology Group, Kasteelpark Arenberg 10, Leuven, 3000, BELGIUM
| |
Collapse
|
28
|
Comparison of Time-Frequency Analyzes for a Sleep Staging Application with CNN. JOURNAL OF BIOMIMETICS BIOMATERIALS AND BIOMEDICAL ENGINEERING 2022. [DOI: 10.4028/p-2j5c10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Sleep staging is the process of acquiring biological signals during sleep and labeling them according to sleep stages. The procedure is performed by an experienced physician and is time-consuming. Automating it reduces the processing load and the time required to identify disease. In this paper, 8 different transform methods for automatic sleep staging based on convolutional neural networks (CNNs) were compared for classifying sleep stages from single-channel electroencephalogram (EEG) signals. Five labels were used to stage sleep: Wake (W), Non Rapid Eye Movement (NonREM)-1 (N1), NonREM-2 (N2), NonREM-3 (N3), and REM (R). The classification was done end-to-end without any hand-crafted features, i.e. without requiring feature engineering. Time-frequency representations obtained by the Short-Time Fourier Transform, Discrete Wavelet Transform, Discrete Cosine Transform, Hilbert-Huang Transform, Discrete Gabor Transform, Fast Walsh-Hadamard Transform, Choi-Williams Distribution, and Wigner-Ville Distribution were classified with a supervised deep convolutional neural network to perform sleep staging. The Discrete Cosine Transform-CNN method (DCT-CNN) showed the highest performance among the methods examined, with an F1-score of 89% and a kappa of 0.86. The findings of this study reveal that transformations chosen for the most accurate representation of the input data are far superior to traditional approaches based on manual extraction of time, frequency, or nonlinear features. The results of this article are expected to be useful to researchers developing low-cost and easily portable devices.
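To make the comparison concrete, the sketch below builds three of the eight time-frequency representations mentioned above (STFT, DCT, DWT) as 2-D "images" that could be fed to a CNN. The window lengths, wavelet, and framing choices are illustrative assumptions, not the study's settings.

```python
# Sketch: alternative time-frequency inputs for a CNN sleep stager.
import numpy as np
from scipy.signal import stft
from scipy.fft import dct
import pywt

def stft_image(epoch, fs=100):
    _, _, z = stft(epoch, fs=fs, nperseg=128)
    return np.log(np.abs(z) + 1e-10)                 # (freq, time) log magnitude

def dct_image(epoch, frame=128):
    frames = epoch[: len(epoch) // frame * frame].reshape(-1, frame)
    return dct(frames, norm="ortho", axis=1).T       # (coefficient, frame index)

def dwt_image(epoch, wavelet="db4", level=5):
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    return np.stack([np.pad(c, (0, width - len(c))) for c in coeffs])  # (level+1, width)

epoch = np.random.randn(3000)                        # one 30-s epoch at 100 Hz
for name, img in [("STFT", stft_image(epoch)),
                  ("DCT", dct_image(epoch)),
                  ("DWT", dwt_image(epoch))]:
    print(name, img.shape)                           # each becomes a 2-D input "image"
```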
Collapse
|
29
|
Phan H, Mikkelsen K. Automatic sleep staging of EEG signals: recent development, challenges, and future directions. Physiol Meas 2022; 43. [PMID: 35320788 DOI: 10.1088/1361-6579/ac6049] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 03/23/2022] [Indexed: 11/11/2022]
Abstract
Modern deep learning holds great potential to transform clinical practice on human sleep. Teaching a machine to carry out routine tasks would greatly reduce the workload of clinicians. Sleep staging, a fundamental step in sleep practice, is a suitable task for this and is the focus of this article. Recently, automatic sleep staging systems have been trained to mimic manual scoring, leading to performance similar to that of human sleep experts, at least on scoring of healthy subjects. Despite tremendous progress, we have not seen automatic sleep scoring adopted widely in clinical environments. This review aims to give the authors' shared view of the most recent state-of-the-art developments in automatic sleep staging, the challenges that still need to be addressed, and the future directions for automatic sleep scoring to achieve clinical value.
Collapse
Affiliation(s)
- Huy Phan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Rd, London, E1 4NS, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
| | - Kaare Mikkelsen
- Department of Electrical and Computer Engineering, Aarhus Universitet, Finlandsgade 22, Aarhus, 8000, DENMARK
| |
Collapse
|
30
|
Zhang C, Yu W, Li Y, Sun H, Zhang Y, De Vos M. CMS2-net: Semi-supervised Sleep Staging for Diverse Obstructive Sleep Apnea Severity. IEEE J Biomed Health Inform 2022; 26:3447-3457. [PMID: 35255000 DOI: 10.1109/jbhi.2022.3156585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Although computer-aided sleep staging algorithms are being integrated into the automatic detection of sleep disorders, most supervised deep learning-based models suffer from insufficient labeled data. While semi-supervised learning (SSL) can mitigate this issue, SSL models are still limited by the lack of discriminative feature extraction across diverse obstructive sleep apnea (OSA) severities, and this deterioration can be exacerbated during domain adaptation. Alleviating the domain shift of SSL models between different OSA conditions has therefore attracted increasing clinical attention. In this work, a co-attention meta sleep staging network (CMS2-net) is proposed to deal simultaneously with two issues: the inter-class disparity problem and the intra-class selection problem. Within CMS2-net, a co-attention module and a triple classifier are designed to explicitly refine coarse feature representations by identifying class boundary inconsistencies. Moreover, mutual information with meta contrastive variance is introduced to supervise the gradient flow from a multiscale view. The performance of the proposed framework is demonstrated on both public and local datasets, where our approach achieves state-of-the-art SSL results.
Collapse
|
31
|
Adversarial learning for semi-supervised pediatric sleep staging with single-EEG channel. Methods 2022; 204:84-91. [DOI: 10.1016/j.ymeth.2022.03.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 03/12/2022] [Accepted: 03/21/2022] [Indexed: 11/18/2022] Open
|
32
|
Robust learning from corrupted EEG with dynamic spatial filtering. Neuroimage 2022; 251:118994. [PMID: 35181552 DOI: 10.1016/j.neuroimage.2022.118994] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 02/03/2022] [Accepted: 02/11/2022] [Indexed: 11/20/2022] Open
Abstract
Building machine learning models using EEG recorded outside of the laboratory setting requires methods robust to noisy data and randomly missing channels. This need is particularly great when working with sparse EEG montages (1-6 channels), often encountered in consumer-grade or mobile EEG devices. Neither classical machine learning models nor deep neural networks trained end-to-end on EEG are typically designed or tested for robustness to corruption, and especially to randomly missing channels. While some studies have proposed strategies for using data with missing channels, these approaches are not practical when sparse montages are used and computing power is limited (e.g., wearables, cell phones). To tackle this problem, we propose dynamic spatial filtering (DSF), a multi-head attention module that can be plugged in before the first layer of a neural network to handle missing EEG channels by learning to focus on good channels and to ignore bad ones. We tested DSF on public EEG data encompassing ∼4,000 recordings with simulated channel corruption and on a private dataset of ∼100 at-home recordings of mobile EEG with natural corruption. Our proposed approach achieves the same performance as baseline models when no noise is applied, but outperforms baselines by as much as 29.4% accuracy when significant channel corruption is present. Moreover, DSF outputs are interpretable, making it possible to monitor the effective channel importance in real-time. This approach has the potential to enable the analysis of EEG in challenging settings where channel corruption hampers the reading of brain signals.
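The core idea of dynamic spatial filtering, predicting per-channel weights from the input itself so corrupted or missing channels can be down-weighted, can be sketched in a simplified form. The module below uses a single small MLP over per-channel statistics rather than the published multi-head attention design, and all sizes are assumptions.

```python
# Simplified sketch of input-dependent channel reweighting (not the published DSF module).
import torch
import torch.nn as nn

class ChannelReweighting(nn.Module):
    def __init__(self, n_channels=4, hidden=16):
        super().__init__()
        # Predict one weight per channel from simple per-channel statistics (mean, std).
        self.mlp = nn.Sequential(nn.Linear(2 * n_channels, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_channels))

    def forward(self, x):                        # x: (batch, channels, samples)
        stats = torch.cat([x.mean(-1), x.std(-1)], dim=-1)
        w = torch.softmax(self.mlp(stats), dim=-1)           # (batch, channels)
        return x * w.unsqueeze(-1), w            # reweighted signal + inspectable weights

x = torch.randn(2, 4, 3000)
x[:, 2] += 50 * torch.randn(2, 3000)             # simulate one badly corrupted channel
reweighted, weights = ChannelReweighting()(x)
print(weights)  # after training, the corrupted channel would receive a low weight,
                # and the weights remain interpretable for real-time monitoring
```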
Collapse
|
33
|
Phan H, Mikkelsen K, Chen OY, Koch P, Mertins A, De Vos M. SleepTransformer: Automatic Sleep Staging with Interpretability and Uncertainty Quantification. IEEE Trans Biomed Eng 2022; 69:2456-2467. [PMID: 35100107 DOI: 10.1109/tbme.2022.3147187] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
BACKGROUND Black-box skepticism is one of the main hindrances impeding deep-learning-based automatic sleep scoring from being used in clinical environments. METHODS Toward interpretability, this work proposes a sequence-to-sequence sleep staging model, SleepTransformer. It is based on the transformer backbone and offers interpretability of the model's decisions at both the epoch and sequence level. We further propose a simple yet efficient method to quantify uncertainty in the model's decisions. The method, which is based on entropy, can serve as a metric for deferring low-confidence epochs to a human expert for further inspection. RESULTS The transformer's self-attention scores are used for interpretability: at the epoch level, the attention scores are encoded as a heat map to highlight sleep-relevant features captured from the input EEG signal; at the sequence level, they are visualized as the influence of neighboring epochs in an input sequence (i.e., the context) on the recognition of a target epoch, mimicking the way manual scoring is done by human experts. CONCLUSION Additionally, we demonstrate that SleepTransformer performs on par with existing methods on two databases of different sizes. SIGNIFICANCE Equipped with interpretability and uncertainty quantification, SleepTransformer holds promise for integration into clinical settings.
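The entropy-based deferral idea described above is easy to sketch: compute the entropy of each epoch's predicted stage distribution and flag high-entropy epochs for manual review. The threshold value below is an illustrative assumption, not the paper's calibrated setting.

```python
# Sketch: entropy of the predicted stage distribution as an uncertainty measure.
import torch

def epoch_entropy(probs, eps=1e-12):
    """probs: (n_epochs, n_stages) softmax outputs -> entropy in nats per epoch."""
    return -(probs * (probs + eps).log()).sum(dim=-1)

def defer_mask(probs, threshold=1.0):
    """True where the model is uncertain and the epoch should be reviewed by an expert."""
    return epoch_entropy(probs) > threshold

# Example: two epochs over 5 stages (W, N1, N2, N3, REM).
probs = torch.softmax(torch.tensor([[4.0, 0.0, 0.0, 0.0, 0.0],      # confident epoch
                                    [0.2, 0.1, 0.3, 0.2, 0.2]]),    # uncertain epoch
                      dim=-1)
print(epoch_entropy(probs), defer_mask(probs))
```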
Collapse
|
34
|
Hong J, Tran HH, Jung J, Jang H, Lee D, Yoon IY, Hong JK, Kim JW. End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices. Nat Sci Sleep 2022; 14:1187-1201. [PMID: 35783665 PMCID: PMC9241996 DOI: 10.2147/nss.s361270] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 06/03/2022] [Indexed: 02/04/2023] Open
Abstract
PURPOSE Nocturnal sounds carry rich information and are easily obtainable in a non-contact manner. Sleep staging using nocturnal sounds recorded by common mobile devices may therefore allow daily at-home sleep tracking. The objective of this study is to introduce an end-to-end (sound-to-sleep-stages) deep learning model for sound-based sleep staging designed to work with audio from microphone chips, which are essential components of mobile devices such as modern smartphones. PATIENTS AND METHODS Two different audio datasets were used: audio routinely recorded by a solitary microphone chip during polysomnography (PSG dataset, N=1154) and audio recorded by a smartphone (smartphone dataset, N=327). The audio was converted into Mel spectrograms to detect latent temporal frequency patterns of breathing and body movement amid ambient noise. The proposed neural network model learns first to extract features from each 30-second epoch and then to analyze inter-epoch relationships of the extracted features to classify the epochs into sleep stages. RESULTS Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. Model performance was not considerably affected by sleep apnea or periodic limb movement. External validation with the smartphone dataset also showed 68% epoch-by-epoch agreement. CONCLUSION The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded by microphone chips to be utilized for sleep staging. A future study using nocturnal sounds recorded by mobile devices in the home environment may further confirm the use of mobile device recordings as an at-home sleep tracker.
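The audio front-end described above, converting a 30-second audio epoch into a Mel spectrogram before the epoch encoder, can be sketched with librosa. The sampling rate, FFT size, hop length, and number of Mel bands below are assumptions, not the study's actual settings.

```python
# Sketch: 30-s audio epoch -> log-Mel spectrogram for the epoch encoder.
import numpy as np
import librosa

def audio_epoch_to_logmel(audio, sr=16000, n_mels=64, n_fft=1024, hop=512):
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)             # (n_mels, time_frames)

# Example: one synthetic 30-s epoch of microphone audio at 16 kHz.
epoch = np.random.randn(30 * 16000).astype(np.float32)
logmel = audio_epoch_to_logmel(epoch)
print(logmel.shape)                                          # (64, ~938)
```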
Collapse
Affiliation(s)
- Joonki Hong
- Asleep Inc., Seoul, Korea; Korea Advanced Institute of Science and Technology, Daejeon, Korea
| | | | | | | | | | - In-Young Yoon
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea; Seoul National University College of Medicine, Seoul, Korea
| | - Jung Kyung Hong
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea; Seoul National University College of Medicine, Seoul, Korea
| | - Jeong-Whun Kim
- Seoul National University College of Medicine, Seoul, Korea; Department of Otorhinolaryngology, Seoul National University Bundang Hospital, Seongnam, Korea
| |
Collapse
|
35
|
Zhang H, Wang X, Li H, Mehendale S, Guan Y. Auto-annotating sleep stages based on polysomnographic data. PATTERNS 2022; 3:100371. [PMID: 35079710 PMCID: PMC8767308 DOI: 10.1016/j.patter.2021.100371] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 07/15/2021] [Accepted: 09/28/2021] [Indexed: 11/25/2022]
Abstract
Sleep disorders affect quality of life, and their clinical diagnosis is a time-consuming and tedious process that requires recording and annotating polysomnographic records. In this work, we developed an auto-annotation algorithm based on polysomnographic records and a deep learning architecture that predicts sleep stages at the millisecond level. The model improves the efficiency of the annotation process, automatically annotating each record within 3.8 s of computation time and with high accuracy. Disease-related events such as arousal and apnea can also be identified by the model, further expanding the physiological insights it can provide. Finally, we explored the applicability of the model to data collected from a different modality to demonstrate its robustness. Highlights: polysomnography enables accurate machine-learning annotation of sleep stages; apnea/arousal can be detected more accurately from full polysomnography than from EEG alone; a U-Net achieved excellent performance in sequence-to-sequence prediction; our deep learning model achieves human-level accuracy in sleep stage annotation.
Sleep quality is one of the top public health concerns. Disturbance during sleep affects people's daily executive functions. In addition, pathological sleep events such as arousal and apnea are closely associated with severe health conditions such as cardiovascular disease. Traditional sleep surveillance requires laborious human effort while offering limited reproducibility. In this study, we present a fast automatic sleep annotation deep learning model with excellent performance. Our model annotates sleep stages as well as arousal/apnea at the same time, which provides insight for the clinical diagnosis of patients with sleep disorders.
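A U-Net-style sequence-to-sequence model producing one prediction per input sample, in the spirit of the dense annotation described above, can be sketched as follows. The depth, channel counts, and input length below are illustrative assumptions, not the published architecture.

```python
# Minimal 1-D U-Net-style sketch for dense sequence-to-sequence sleep stage prediction.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv1d(c_in, c_out, 5, padding=2), nn.ReLU(),
                         nn.Conv1d(c_out, c_out, 5, padding=2), nn.ReLU())

class TinyUNet1D(nn.Module):
    def __init__(self, in_ch=4, n_stages=5):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv1d(16, n_stages, 1)               # one prediction per sample

    def forward(self, x):                                    # x: (batch, channels, samples)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connections
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                                  # (batch, n_stages, samples)

# Example: 4 PSG channels, 2 minutes at 64 Hz (length divisible by 4 for the two pools).
out = TinyUNet1D()(torch.randn(1, 4, 7680))
print(out.shape)                                              # torch.Size([1, 5, 7680])
```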
Collapse
|
36
|
Autthasan P, Chaisaen R, Sudhawiyangkul T, Rangpong P, Kiatthaveephong S, Dilokthanakul N, Bhakdisongkhram G, Phan H, Guan C, Wilaiprasitporn T. MIN2Net: End-to-End Multi-Task Learning for Subject-Independent Motor Imagery EEG Classification. IEEE Trans Biomed Eng 2021; 69:2105-2118. [PMID: 34932469 DOI: 10.1109/tbme.2021.3137184] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Advances in motor imagery (MI)-based brain-computer interfaces (BCIs) allow the control of several applications by decoding neurophysiological phenomena, usually recorded non-invasively by electroencephalography (EEG). Despite significant advances in MI-based BCI, EEG rhythms are specific to each subject and vary over time. These issues pose significant challenges to enhancing classification performance, especially in a subject-independent manner. METHODS To overcome these challenges, we propose MIN2Net, a novel end-to-end multi-task learning approach. We integrate deep metric learning into a multi-task autoencoder to learn a compact and discriminative latent representation from EEG and to perform classification simultaneously. RESULTS This approach reduces pre-processing complexity and yields significant performance improvements in EEG classification. Experimental results in the subject-independent setting show that MIN2Net outperforms state-of-the-art techniques, achieving F1-score improvements of 6.72% and 2.23% on the SMR-BCI and OpenBMI datasets, respectively. CONCLUSION We demonstrate that MIN2Net improves the discriminative information in the latent representation. SIGNIFICANCE This study indicates the possibility and practicality of using this model to develop MI-based BCI applications for new users without calibration.
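The multi-task idea described above, one encoder feeding a decoder (reconstruction), a deep-metric loss, and a classifier, with the losses summed, can be sketched compactly. The dimensions, a plain triplet loss standing in for the deep metric objective, and equal loss weights are illustrative assumptions, not the MIN2Net design.

```python
# Sketch: multi-task autoencoder combining reconstruction, metric learning, and classification.
import torch
import torch.nn as nn

class MultiTaskAE(nn.Module):
    def __init__(self, in_dim=1000, latent=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.classifier(z)

model = MultiTaskAE()
x, y = torch.randn(16, 1000), torch.randint(0, 2, (16,))                  # flattened EEG trials
anchor, pos, neg = torch.randn(16, 1000), torch.randn(16, 1000), torch.randn(16, 1000)

z, x_rec, logits = model(x)
loss = (nn.functional.mse_loss(x_rec, x)                     # reconstruction task
        + nn.functional.cross_entropy(logits, y)             # supervised classification task
        + nn.functional.triplet_margin_loss(model.encoder(anchor),
                                            model.encoder(pos),
                                            model.encoder(neg)))  # deep metric learning task
loss.backward()
```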
Collapse
|
37
|
Fiorillo L, Favaro P, Faraci FD. DeepSleepNet-Lite: A Simplified Automatic Sleep Stage Scoring Model With Uncertainty Estimates. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2076-2085. [PMID: 34648450 DOI: 10.1109/tnsre.2021.3117970] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Deep learning is widely used in the most recent automatic sleep scoring algorithms. Its popularity stems from its excellent performance and from its ability to process raw signals and learn features directly from the data. Most existing scoring algorithms exploit very computationally demanding architectures, due to their high number of training parameters, and process lengthy input sequences (up to 12 minutes). Only a few of these architectures provide an estimate of the model uncertainty. In this study, we propose DeepSleepNet-Lite, a simplified and lightweight scoring architecture that processes only 90-second EEG input sequences. We exploit, for the first time in sleep scoring, the Monte Carlo dropout technique to enhance the performance of the architecture and to detect uncertain instances. The evaluation is performed on the single-channel Fpz-Cz EEG from the open-source Sleep-EDF expanded database. DeepSleepNet-Lite achieves performance slightly lower than, if not on par with, the existing state-of-the-art architectures in overall accuracy, macro F1-score and Cohen's kappa (on Sleep-EDF v1-2013 ±30 mins: 84.0%, 78.0%, 0.78; on Sleep-EDF v2-2018 ±30 mins: 80.3%, 75.2%, 0.73). Monte Carlo dropout enables the estimation of uncertain predictions. By rejecting the uncertain instances, the model achieves higher performance on both versions of the database (on Sleep-EDF v1-2013 ±30 mins: 86.1%, 79.6%, 0.81; on Sleep-EDF v2-2018 ±30 mins: 82.3%, 76.7%, 0.76). Our lighter sleep scoring approach paves the way for applying scoring algorithms to real-time sleep analysis.
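Monte Carlo dropout at inference time works by keeping dropout active, running several stochastic forward passes, and using the spread of the predictions as an uncertainty signal. The sketch below illustrates that technique on a stand-in network; the number of passes, the rejection rule, and the architecture are assumptions, not the DeepSleepNet-Lite setup.

```python
# Sketch: Monte Carlo dropout inference with a simple rejection rule.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3000, 128), nn.ReLU(), nn.Dropout(0.5),
                    nn.Linear(128, 5))                        # stand-in sleep stager

def mc_dropout_predict(model, x, n_passes=20):
    model.train()                                   # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    return probs.mean(0), probs.std(0)              # mean prediction + per-class spread

x = torch.randn(4, 3000)                            # 4 EEG epochs (flattened samples)
mean_probs, spread = mc_dropout_predict(net, x)
uncertain = spread.max(dim=-1).values > 0.2         # illustrative rejection threshold
print(mean_probs.argmax(-1), uncertain)             # stages + which epochs to reject
```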
Collapse
|
38
|
Guillot A, Thorey V. RobustSleepNet: Transfer Learning for Automated Sleep Staging at Scale. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1441-1451. [PMID: 34288872 DOI: 10.1109/tnsre.2021.3098968] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Sleep disorder diagnosis relies on the analysis of polysomnography (PSG) records. As a preliminary step of this examination, sleep stages are systematically determined. In practice, sleep stage classification relies on the visual inspection of 30-second epochs of polysomnography signals. Numerous automatic approaches have been developed to replace this tedious and expensive task. Although these methods demonstrated better performance than human sleep experts on specific datasets, they remain largely unused in sleep clinics. The main reason is that each sleep clinic uses a specific PSG montage that most automatic approaches cannot handle out-of-the-box. Moreover, even when the PSG montage is compatible, publications have shown that automatic approaches perform poorly on unseen data with different demographics. To address these issues, we introduce RobustSleepNet, a deep learning model for automatic sleep stage classification able to handle arbitrary PSG montages. We trained and evaluated this model in a leave-one-out-dataset fashion on a large corpus of 8 heterogeneous sleep staging datasets to make it robust to demographic changes. When evaluated on an unseen dataset, RobustSleepNet reaches 97% of the F1 of a model explicitly trained on this dataset. Hence, RobustSleepNet unlocks the possibility to perform high-quality out-of-the-box automatic sleep staging with any clinical setup. We further show that finetuning RobustSleepNet, using a part of the unseen dataset, increases the F1 by 2% when compared to a model trained specifically for this dataset. Therefore, finetuning might be used to reach a state-of-the-art level of performance on a specific population.
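The finetuning step mentioned above, adapting a pretrained stager to a new dataset with a small amount of its data, is conceptually simple: start from the pretrained weights, freeze the shared feature extractor, and retrain only the classification head at a low learning rate. The sketch below shows that generic recipe with a stand-in model; the layer layout, learning rate, and freezing strategy are assumptions, not RobustSleepNet's actual procedure.

```python
# Sketch: finetune only the classification head of a pretrained stager on new data.
import torch
import torch.nn as nn

pretrained = nn.Sequential(                       # stand-in for a pretrained staging model
    nn.Linear(3000, 128), nn.ReLU(),              # "feature extractor"
    nn.Linear(128, 5),                            # "classification head"
)

for p in pretrained[0].parameters():              # freeze the shared feature extractor
    p.requires_grad = False

optimizer = torch.optim.Adam(pretrained[2].parameters(), lr=1e-4)   # small finetuning LR
x, y = torch.randn(32, 3000), torch.randint(0, 5, (32,))            # few labeled target epochs

for _ in range(10):                               # brief finetuning loop
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(pretrained(x), y)
    loss.backward()
    optimizer.step()
```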
Collapse
|
39
|
Perslev M, Darkner S, Kempfner L, Nikolic M, Jennum PJ, Igel C. U-Sleep: resilient high-frequency sleep staging. NPJ Digit Med 2021; 4:72. [PMID: 33859353 PMCID: PMC8050216 DOI: 10.1038/s41746-021-00440-5] [Citation(s) in RCA: 91] [Impact Index Per Article: 30.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 03/10/2021] [Indexed: 02/02/2023] Open
Abstract
Sleep disorders affect a large portion of the global population and are strong predictors of morbidity and all-cause mortality. Sleep staging segments a period of sleep into a sequence of phases, providing the basis for most clinical decisions in sleep medicine. Manual sleep staging is difficult and time-consuming, as experts must evaluate hours of polysomnography (PSG) recordings with electroencephalography (EEG) and electrooculography (EOG) data for each patient. Here, we present U-Sleep, a publicly available, ready-to-use deep-learning-based system for automated sleep staging (sleep.ai.ku.dk). U-Sleep is a fully convolutional neural network, which was trained and evaluated on PSG recordings from 15,660 participants of 16 clinical studies. It provides accurate segmentations across a wide range of patient cohorts and PSG protocols not considered when building the system. U-Sleep works for arbitrary combinations of typical EEG and EOG channels, and its special deep learning architecture can label sleep stages at shorter intervals than the typical 30-s periods used during training. We show that these labels can provide additional diagnostic information and lead to new ways of analyzing sleep. U-Sleep performs on par with state-of-the-art automatic sleep staging systems on multiple clinical datasets, even when the other systems were built specifically for the particular data. A comparison with consensus scores from a previously unseen clinic shows that U-Sleep performs as accurately as the best of the human experts. U-Sleep can support the sleep staging workflow of medical experts, which decreases healthcare costs, and can provide highly accurate segmentations when human expertise is lacking.
Collapse
Affiliation(s)
- Mathias Perslev
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Sune Darkner
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Lykke Kempfner
- Danish Center for Sleep Medicine, Rigshospitalet, Copenhagen, Denmark
| | - Miki Nikolic
- Danish Center for Sleep Medicine, Rigshospitalet, Copenhagen, Denmark
| | | | - Christian Igel
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
| |
Collapse
|