1. Huang X, Shirahama K, Irshad MT, Nisar MA, Piet A, Grzegorzek M. Sleep Stage Classification in Children Using Self-Attention and Gaussian Noise Data Augmentation. Sensors (Basel) 2023; 23:3446. PMID: 37050506; PMCID: PMC10098613; DOI: 10.3390/s23073446. Received 27 Feb 2023; revised 20 Mar 2023; accepted 22 Mar 2023.
Abstract
The analysis of sleep stages in children plays an important role in early diagnosis and treatment. This paper introduces our sleep stage classification method, which addresses two challenges. The first is the data imbalance problem, i.e., the highly skewed class distribution with underrepresented minority classes. For this, a Gaussian Noise Data Augmentation (GNDA) algorithm was applied to polysomnography recordings to balance the data sizes across sleep stages. The second challenge is the difficulty of identifying minority sleep stages, given their short duration and their similarity to other stages in terms of EEG characteristics. To overcome this, we developed a DeConvolution- and Self-Attention-based Model (DCSAM), which inverts the feature map of a hidden layer back to the input space to extract local features, and computes the correlations between all possible pairs of features to distinguish sleep stages. The results on our dataset show that DCSAM based on GNDA obtains an accuracy of 90.26% and a macro F1-score of 86.51%, which are higher than those of our previous method. We also tested DCSAM on a well-known public dataset, Sleep-EDFX, to verify whether it is applicable to sleep data from adults. It achieves a performance comparable to state-of-the-art methods, with accuracies of 91.77%, 92.54%, 94.73%, and 95.30% for six-stage, five-stage, four-stage, and three-stage classification, respectively. These results imply that DCSAM based on GNDA has great potential to improve performance in various medical domains by addressing data imbalance and exploiting correlations among features in time-series data.
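The GNDA idea described in the abstract can be illustrated with a minimal NumPy sketch: minority-class epochs are oversampled with Gaussian-noise-perturbed copies until every stage matches the majority count. The function name, the noise scale `sigma`, and the oversample-to-the-majority policy are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_noise_augment(epochs, labels, sigma=0.01, rng=None):
    """Balance a dataset by oversampling minority classes with
    Gaussian-noise-perturbed copies of their epochs.

    epochs: array of shape (n_epochs, n_channels, n_samples)
    labels: array of shape (n_epochs,)
    """
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    aug_x, aug_y = [epochs], [labels]
    for cls, count in zip(classes, counts):
        deficit = target - count
        if deficit == 0:
            continue
        # Resample minority-class epochs and perturb each copy with noise.
        idx = np.flatnonzero(labels == cls)
        picks = rng.choice(idx, size=deficit, replace=True)
        noisy = epochs[picks] + rng.normal(0.0, sigma, epochs[picks].shape)
        aug_x.append(noisy)
        aug_y.append(np.full(deficit, cls))
    return np.concatenate(aug_x), np.concatenate(aug_y)
```

After augmentation every class has the same number of epochs, so a classifier trained on the result no longer sees a skewed stage distribution.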
Affiliation(s)
- Xinyu Huang
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Kimiaki Shirahama
- Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City 577-8502, Osaka, Japan
- Muhammad Tausif Irshad
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Department of IT, University of the Punjab, Lahore 54000, Pakistan
- Artur Piet
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Department of Knowledge Engineering, University of Economics, Bogucicka 3, 40287 Katowice, Poland
2. Cheng C, Zhou Y, You B, Liu Y, Fei G, Yang L, Dai Y. Multiview Feature Fusion Representation for Interictal Epileptiform Spikes Detection. Int J Neural Syst 2022; 32:2250014. PMID: 35272587; DOI: 10.1142/s0129065722500149.
Abstract
Interictal epileptiform spikes (IES) in scalp electroencephalogram (EEG) signals are strongly related to the epileptogenic region. Since IES are difficult to detect in scalp EEG signals, the primary diagnosis depends heavily on their visual evaluation. However, visual inspection of EEG signals, the standard IES detection procedure, is time-consuming, highly subjective, and error-prone. Furthermore, the highly complex, nonlinear, and nonstationary characteristics of EEG signals lead to an incomplete representation of the signals in existing computer-aided methods and, consequently, unsatisfactory detection performance. Therefore, a novel multiview feature fusion representation (MVFFR) method was developed and combined with a robust classifier to detect EEG signals with and without IES. MVFFR comprises two steps: first, temporal, frequency, time-frequency, spatial, and nonlinear domain features are extracted from the IES to express their latent information effectively; second, the unsupervised infinite feature selection method determines the most distinct feature fusion representation. Experimental results on a balanced dataset of six patients showed that MVFFR achieved the best detection performance (accuracy: 89.27%, sensitivity: 89.01%, specificity: 89.54%, and precision: 89.82%) compared with other feature ranking methods, and that the features in MVFFR were complementary and indispensable. Additionally, in an independent test on the unbalanced dataset of one patient, MVFFR maintained excellent generalization capacity with a false detection rate of 0.15 per minute.
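The multiview idea, extracting features from several complementary domains of one EEG segment before fusing them, can be sketched as below. This is a simplified illustration only: the specific statistics, band edges, and entropy estimate are assumptions, and the paper's infinite feature selection step is omitted.

```python
import numpy as np

def multiview_features(x, fs=256.0):
    """Concatenate simple features from several 'views' of one EEG
    channel segment x (1-D array sampled at fs Hz)."""
    # Temporal view: basic amplitude statistics.
    temporal = [x.mean(), x.std(), np.abs(np.diff(x)).mean()]
    # Frequency view: relative power in classic EEG bands (delta..beta).
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    total = psd.sum() + 1e-12
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    frequency = [psd[(freqs >= lo) & (freqs < hi)].sum() / total
                 for lo, hi in bands]
    # Nonlinear view: Shannon entropy of the amplitude distribution.
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    nonlinear = [-(p * np.log2(p)).sum()]
    return np.array(temporal + frequency + nonlinear)
```

Fusing the views is then just concatenation; a feature-ranking step such as infinite feature selection would operate on the resulting vector.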
Affiliation(s)
- Chenchen Cheng
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin 150080, P. R. China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, P. R. China
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, P. R. China
- Yuanfeng Zhou
- Department of Neurology, Children's Hospital of Fudan University, Shanghai 200000, P. R. China
- Bo You
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin 150080, P. R. China
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, P. R. China
- School of Automation, Harbin University of Science and Technology, Harbin 150080, P. R. China
- Yan Liu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, P. R. China
- Jinan Guoke Medical Engineering Technology Development Co., Ltd, Jinan 250000, P. R. China
- Gao Fei
- Department of Radiology, Shandong Provincial Hospital, Cheeloo College of Medicine, Shandong University, Jinan, P. R. China
- Liling Yang
- Department of Neurology, Shandong Provincial Hospital, Affiliated to Shandong First Medical University, Jinan 250021, P. R. China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, P. R. China
- Jinan Guoke Medical Engineering Technology Development Co., Ltd, Jinan 250000, P. R. China
3. Huang X, Shirahama K, Li F, Grzegorzek M. Sleep stage classification for child patients using DeConvolutional Neural Network. Artif Intell Med 2020; 110:101981. PMID: 33250147; DOI: 10.1016/j.artmed.2020.101981. Received 23 Apr 2020; revised 8 Oct 2020; accepted 27 Oct 2020.
Abstract
Studies in the literature show that the prevalence of sleep disorders in children is far higher than in adults. Although much research effort has been devoted to sleep stage classification for adults, children exhibit significantly different sleep stage characteristics. Therefore, there is an urgent need for sleep stage classification targeting children in particular. Our method focuses on two issues. The first is timestamp-based segmentation (TSS), which deals with the fine-grained annotation of sleep stage labels at each timestamp. By comparison, popular sliding-window approaches unnecessarily aggregate such labels into coarse-grained ones. We utilize a DeConvolutional Neural Network (DCNN) that inversely maps features of a hidden layer back to the input space to predict the sleep stage label at each timestamp. Thus, our DCNN can yield better classification performance by considering labels at numerous timestamps. The second issue is the necessity of multiple channels. Different clinical signs, symptoms, or other auxiliary examinations can be represented by different polysomnography (PSG) recordings, so all of them should be analyzed comprehensively. We therefore exploit multivariate time series of PSG recordings, including six electroencephalogram (EEG) channels, two electrooculogram (EOG) channels (left and right), one chin electromyogram (EMG) channel, and two leg EMG channels. Our DCNN-based method is tested on our SDCP dataset, collected from child patients aged 5 to 10 years. The results show that our method yields an overall classification accuracy of 84.27% and a macro F1-score of 72.51%, which are higher than those of existing sliding-window-based methods. One of the biggest advantages of our DCNN-based method is that it processes raw PSG recordings and internally extracts features useful for accurate sleep stage classification.
We examine whether this is applicable to sleep data of adult patients by testing our method on the well-known public dataset Sleep-EDFX. Our method achieves an average overall accuracy of 90.89%, which is comparable to that of state-of-the-art methods, without using any hand-crafted features. This result indicates the great potential of our method, because it can be applied generally to timestamp-level classification of multivariate time series in various medical fields. Additionally, we provide source code so that researchers can reproduce the results in this paper and extend our method.
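The contrast between timestamp-level labels and sliding-window aggregation can be illustrated with a toy example (not the authors' DCNN): majority voting within a window discards brief stage episodes that a per-timestamp classifier could still be trained and scored on.

```python
import numpy as np

def window_majority_labels(labels, win):
    """Coarse-grained labelling: each non-overlapping window gets the
    majority label of its timestamps, which is effectively what
    sliding-window methods predict."""
    n = len(labels) // win
    out = np.empty(n * win, dtype=labels.dtype)
    for i in range(n):
        chunk = labels[i * win:(i + 1) * win]
        vals, counts = np.unique(chunk, return_counts=True)
        out[i * win:(i + 1) * win] = vals[counts.argmax()]
    return out

# Per-timestamp labels with a short stage transition inside one window.
y = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0])
coarse = window_majority_labels(y, win=4)
# Window-level aggregation smears the transition across the whole window,
# whereas a timestamp-level model is scored against y directly.
assert not np.array_equal(coarse, y)
```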
Affiliation(s)
- Xinyu Huang
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, Lübeck 23538, Germany
- Kimiaki Shirahama
- Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City, Osaka 577-8502, Japan
- Frédéric Li
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, Lübeck 23538, Germany
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, Lübeck 23538, Germany