1
Jiang C, Xie W, Zheng J, Yan B, Luo J, Zhang J. MLS-Net: An Automatic Sleep Stage Classifier Utilizing Multimodal Physiological Signals in Mice. Biosensors 2024; 14:406. PMID: 39194635. DOI: 10.3390/bios14080406.
Abstract
Over the past decades, feature-based statistical machine learning and deep neural networks have been extensively used for automatic sleep stage classification (ASSC). Feature-based approaches offer clear insights into sleep characteristics and require little computational power, but they often fail to capture the spatial-temporal context of the data. Deep neural networks, in contrast, can process raw sleep signals directly and deliver superior performance, yet overfitting, inconsistent accuracy, and high computational cost remain the primary drawbacks limiting their end-user acceptance. To address these challenges, we developed a novel neural network model, MLS-Net, which combines the strengths of neural networks and feature extraction for automated sleep staging in mice. MLS-Net takes temporal and spectral features from multimodal signals, such as EEG, EMG, and eye movements (EMs), as inputs and incorporates a bidirectional Long Short-Term Memory (bi-LSTM) network to capture the spatial-temporal nonlinear characteristics inherent in sleep signals. Our studies demonstrate that MLS-Net achieves an overall classification accuracy of 90.4% and, for the REM state, a precision of 91.1%, sensitivity of 84.7%, and F1-score of 87.5% in mice, outperforming other neural network and feature-based algorithms on our multimodal dataset.
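The spectral features this abstract refers to are commonly relative band powers of each epoch. A minimal sketch of such an extractor is below; the band edges, sampling rate, and epoch length are generic assumptions, not values taken from the paper.

```python
import numpy as np

# Conventional EEG band edges in Hz (generic convention, not from the paper).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs):
    """Relative spectral power per band for one signal epoch, via the FFT."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    total = psd[(freqs >= 0.5) & (freqs < 30)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

fs = 100                                   # assumed sampling rate
t = np.arange(4 * fs) / fs                 # one 4-second epoch
features = band_powers(np.sin(2 * np.pi * 6 * t), fs)   # 6 Hz tone -> theta
```

A vector of such features per epoch, stacked over time, is the kind of input a bi-LSTM sequence model can consume.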
Affiliation(s)
- Chengyong Jiang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
- Wenbin Xie
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
- Jiadong Zheng
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
- Biao Yan
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
- Junwen Luo
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
- Jiayi Zhang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institutes of Brain Science, Institute for Medical and Engineering Innovation, Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai 200032, China
2
Khan MA. A Comparative Study on Imputation Techniques: Introducing a Transformer Model for Robust and Efficient Handling of Missing EEG Amplitude Data. Bioengineering (Basel) 2024; 11:740. PMID: 39199698. PMCID: PMC11351899. DOI: 10.3390/bioengineering11080740.
Abstract
In clinical datasets, missing data often occur for various reasons, including non-response, data corruption, and errors in data collection or processing. Such missing values can lead to biased statistical analyses, reduced statistical power, and potentially misleading findings, making effective imputation critical. Traditional imputation methods, such as zero imputation, mean imputation, and k-Nearest Neighbors (KNN) imputation, attempt to address these gaps. However, these methods often fail to capture the underlying data complexity accurately, leading to oversimplified assumptions and errors in prediction. This study introduces a novel imputation model employing transformer-based architectures to address these challenges. Notably, the model distinguishes between complete and incomplete EEG signal amplitude data in two datasets: PhysioNet and CHB-MIT. By training exclusively on complete amplitude data, the TabTransformer accurately learns and predicts missing values, capturing intricate patterns and relationships inherent in EEG amplitude data. Evaluation using various error metrics and the R² score demonstrates significant improvements over traditional methods such as zero, mean, and KNN imputation. The proposed model achieves R² scores of 0.993 for PhysioNet and 0.97 for CHB-MIT, highlighting its efficacy in handling complex clinical data patterns and improving dataset integrity. This underscores the transformative potential of transformer models in advancing the utility and reliability of clinical datasets.
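The mean-imputation baseline the study compares against, and the R² score used to grade imputed values, can be sketched in a few lines of numpy (array contents are illustrative only):

```python
import numpy as np

def mean_impute(X):
    """Replace each NaN with the observed mean of its column."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

X = np.array([[1.0, 2.0], [np.nan, 4.0], [3.0, np.nan]])
filled = mean_impute(X)   # NaNs become the column means 2.0 and 3.0
```

A learned imputer such as the transformer model is scored by masking known values, predicting them, and comparing against the ground truth with `r2_score`.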
Affiliation(s)
- Murad Ali Khan
- Department of Computer Engineering, Jeju National University, Jeju 63243, Jeju-do, Republic of Korea
3
Zhang H, Chen J, Liao B, Wu FX, Bi XA. Deep Canonical Correlation Fusion Algorithm Based on Denoising Autoencoder for ASD Diagnosis and Pathogenic Brain Region Identification. Interdiscip Sci 2024; 16:455-468. PMID: 38573456. DOI: 10.1007/s12539-024-00625-y.
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition distinguished by unconventional neural activity. Early intervention is key to managing the progression of ASD, and current research primarily focuses on the use of structural magnetic resonance imaging (sMRI) or resting-state functional magnetic resonance imaging (rs-fMRI) for diagnosis. Moreover, the use of autoencoders for disease classification has not been sufficiently explored. In this study, we introduce a new autoencoder-based framework, the Deep Canonical Correlation Fusion algorithm based on a Denoising Autoencoder (DCCF-DAE), which proves effective in handling high-dimensional data. The framework extracts features from different data types with an advanced autoencoder, fuses these features through the DCCF model, and then uses the fused features for disease classification. DCCF integrates functional and structural data to help accurately diagnose ASD and identify critical Regions of Interest (ROIs) in disease mechanisms. We compare the proposed framework with other methods on the Autism Brain Imaging Data Exchange (ABIDE) database, and the results demonstrate its outstanding performance in ASD diagnosis. The superiority of DCCF-DAE highlights its potential as a crucial tool for early ASD diagnosis and monitoring.
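The canonical-correlation step at the heart of a fusion model like DCCF can be illustrated with plain linear CCA on two feature matrices (the deep variant replaces these matrices with autoencoder representations; everything below is a generic sketch, not the paper's implementation):

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between two views X (n, p) and Y (n, q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whiten each view; the singular values of the whitened
    # cross-covariance are the canonical correlations.
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Kx @ Sxy @ Ky.T, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))                       # "functional" view
Y = X @ rng.standard_normal((3, 2)) + 0.01 * rng.standard_normal((500, 2))
corrs = canonical_correlations(X, Y)                    # top value near 1
```

Maximizing these correlations between the two learned views is what aligns the functional and structural representations before fusion.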
Affiliation(s)
- Huilian Zhang
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Jie Chen
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Bo Liao
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Fang-Xiang Wu
- Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, S7N5A9, Canada
- Xia-An Bi
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China.
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China.
- College of Information Science and Engineering, Hunan Normal University, Changsha, Hunan, 410081, China.
4
Zhang D, Peng Z, Sun S, van Pul C, Shan C, Dudink J, Andriessen P, Aarts RM, Long X. Characterising the motion and cardiorespiratory interaction of preterm infants can improve the classification of their sleep state. Acta Paediatr 2024; 113:1236-1245. PMID: 38501583. DOI: 10.1111/apa.17211.
Abstract
AIM This study aimed to classify quiet sleep, active sleep and wake states in preterm infants by analysing cardiorespiratory signals obtained from routine patient monitors. METHODS We studied eight preterm infants, with an average postmenstrual age of 32.3 ± 2.4 weeks, in a neonatal intensive care unit in the Netherlands. Electrocardiography and chest impedance respiratory signals were recorded. After filtering and R-peak detection, cardiorespiratory features and motion and cardiorespiratory interaction features were extracted, based on previous research. An extremely randomised trees algorithm was used for classification, and performance was evaluated using leave-one-patient-out cross-validation and Cohen's kappa coefficient. RESULTS A sleep expert annotated 4731 30-second epochs (39.4 h); active sleep, quiet sleep and wake accounted for 73.3%, 12.6% and 14.1%, respectively. Using all features and the extremely randomised trees algorithm, the binary discrimination between active and quiet sleep was better than between other states. Incorporating motion and cardiorespiratory interaction features improved the classification of all sleep states (kappa 0.38 ± 0.09) compared with analyses without these features (kappa 0.31 ± 0.11). CONCLUSION Cardiorespiratory interactions contributed to detecting quiet sleep and motion features contributed to detecting wake states. This combination improved the automated classification of sleep states.
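Cohen's kappa, the chance-corrected agreement measure reported above, can be computed from two label sequences as follows (the epoch labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Four 30-second epochs: active sleep (AS), quiet sleep (QS), wake (W).
kappa = cohens_kappa(["AS", "AS", "QS", "W"], ["AS", "QS", "QS", "W"])
```

Here agreement is 3/4 observed against an expected chance agreement of 0.3125, giving kappa ≈ 0.64; a kappa of 0.38, as in the study, reflects the difficulty of the task rather than raw accuracy.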
Affiliation(s)
- Dandan Zhang
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Zheng Peng
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Applied Physics and Science Education, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Clinical Physics, Máxima Medical Center, Veldhoven, The Netherlands
- Shaoxiong Sun
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Carola van Pul
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Applied Physics and Science Education, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Clinical Physics, Máxima Medical Center, Veldhoven, The Netherlands
- Caifeng Shan
- College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, China
- School of Intelligence Science and Technology, Nanjing University, Nanjing, China
- Jeroen Dudink
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands
- Peter Andriessen
- Department of Applied Physics and Science Education, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Neonatology, Máxima Medical Center, Veldhoven, The Netherlands
- Ronald M Aarts
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Xi Long
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
5
Huang D, Yu D, Zeng Y, Song X, Pan L, He J, Ren L, Yang J, Lu H, Wang W. Generalized Camera-Based Infant Sleep-Wake Monitoring in NICUs: A Multi-Center Clinical Trial. IEEE J Biomed Health Inform 2024; 28:3015-3028. PMID: 38446652. DOI: 10.1109/jbhi.2024.3371687.
Abstract
Infant sleep-wake behavior is an essential indicator of physiological and neurological maturity, and its circadian transition is important for evaluating the recovery of preterm infants from inadequate physiological function and cognitive disorders. Recently, camera-based infant sleep-wake monitoring has been investigated, but the generalization challenges caused by variance across infants and clinical environments have not been addressed for this application. In this paper, we conducted a multi-center clinical trial at four hospitals to improve the generalization of camera-based infant sleep-wake monitoring. Using face videos of 64 term and 39 preterm infants recorded in NICUs, we proposed a novel sleep-wake classification strategy, called the consistent deep representation constraint (CDRC), which forces the convolutional neural network (CNN) to make consistent predictions for samples from different conditions but with the same label, addressing the variance caused by infants and environments. The clinical validation shows that with CDRC, all CNN backbones obtain over 85% accuracy, sensitivity, and specificity in both the cross-age and cross-environment experiments, improving on their counterparts without CDRC by almost 15% in all metrics. This demonstrates that by improving the consistency of the deep representations of samples with the same state, we can significantly improve the generalization of infant sleep-wake classification.
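The core idea of a constraint like CDRC, pulling representations of same-label samples from different conditions together, can be sketched as a penalty on the spread of same-label embeddings. This is a generic sketch under assumed shapes and names; the paper's actual loss may differ:

```python
import numpy as np

def consistency_penalty(embeddings, labels):
    """Mean squared distance of each embedding to its label centroid.
    Minimizing this pulls same-state samples recorded from different
    infants and environments toward a shared representation."""
    total, count = 0.0, 0
    for lab in np.unique(labels):
        group = embeddings[labels == lab]
        centroid = group.mean(axis=0)
        total += np.sum((group - centroid) ** 2)
        count += len(group)
    return total / count

emb = np.array([[0.0, 0.0], [0.0, 0.0], [2.0, 2.0], [4.0, 2.0]])
labels = np.array([0, 0, 1, 1])   # 0 = sleep, 1 = wake (illustrative)
penalty = consistency_penalty(emb, labels)
```

During training, such a term would be added to the ordinary classification loss so the backbone cannot exploit condition-specific cues.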
6
Abbasi SF, Abbas A, Ahmad I, Alshehri MS, Almakdi S, Ghadi YY, Ahmad J. Automatic neonatal sleep stage classification: A comparative study. Heliyon 2023; 9:e22195. PMID: 38058619. PMCID: PMC10695968. DOI: 10.1016/j.heliyon.2023.e22195.
Abstract
Sleep is an essential feature of living beings. For neonates, it is vital for mental and physical development. Sleep stage cycling is an important parameter for assessing neonatal brain and physical development, so it is crucial to monitor newborns' sleep in the neonatal intensive care unit (NICU). Currently, polysomnography (PSG) is the gold-standard method for classifying neonatal sleep patterns, but it is expensive and requires substantial human involvement. Over the last two decades, multiple researchers have worked on automatic sleep stage classification algorithms using electroencephalography (EEG), electrocardiography (ECG), and video. In this study, we present a comprehensive review of existing algorithms for neonatal sleep staging, their limitations, and future recommendations. Additionally, a brief comparison of the extracted features, classification algorithms, and evaluation parameters is reported.
Affiliation(s)
- Saadullah Farooq Abbasi
- Department of Electronic, Electrical and System Engineering, University of Birmingham, Birmingham, United Kingdom
- Awais Abbas
- Department of Electronic, Electrical and System Engineering, University of Birmingham, Birmingham, United Kingdom
- Iftikhar Ahmad
- James Watt School of Engineering, University of Glasgow, United Kingdom
- Mohammed S. Alshehri
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran, Saudi Arabia
- Sultan Almakdi
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran, Saudi Arabia
- Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Abu Dhabi P.O. Box 112612, United Arab Emirates
- Jawad Ahmad
- School of Computing, Engineering and the Built Environment, Edinburgh Napier University, Edinburgh EH10 5DT, UK
7
Abbasi SF, Abbasi QH, Saeed F, Alghamdi NS. A convolutional neural network-based decision support system for neonatal quiet sleep detection. Math Biosci Eng 2023; 20:17018-17036. PMID: 37920045. DOI: 10.3934/mbe.2023759.
Abstract
Sleep plays an important role in neonatal brain and physical development, making its detection and characterization important for assessing early-stage development. In this study, we propose an automatic and computationally efficient algorithm to detect neonatal quiet sleep (QS) using a convolutional neural network (CNN). Our study used 38 hours of electroencephalography (EEG) recordings, collected from 19 neonates at Fudan Children's Hospital in Shanghai, China (Approval No. (2020) 22). To train and test the CNN, we extracted 12 prominent time and frequency domain features from 9 bipolar EEG channels. The CNN architecture comprised two convolutional layers with pooling and rectified linear unit (ReLU) activation. Additionally, a smoothing filter was applied to hold the sleep stage for 3 minutes. Through performance testing, our proposed method achieved 94.07% accuracy, 89.70% sensitivity, 94.40% specificity, a 79.82% F1-score, and a 0.74 kappa coefficient when compared to human expert annotations. A notable advantage of our approach is its computational efficiency, with the entire training and testing process requiring only 7.97 seconds. The proposed algorithm has been validated using leave-one-subject-out (LOSO) cross-validation, which demonstrates its consistent performance across a diverse range of neonates. Our findings highlight the potential of our algorithm for real-time neonatal sleep stage classification, offering a fast and cost-effective solution. This research opens avenues for further investigations in early-stage development monitoring and the assessment of neonatal health.
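The post-hoc smoothing step, holding a predicted stage over a window, can be approximated with a sliding majority vote over epoch labels. The window length below assumes 30-second epochs, so six epochs span three minutes; this is a sketch, not the paper's exact filter:

```python
from collections import Counter

def smooth_stages(preds, window=6):
    """Sliding majority vote over epoch-wise sleep-stage predictions,
    suppressing isolated single-epoch stage flips."""
    half = window // 2
    out = []
    for i in range(len(preds)):
        segment = preds[max(0, i - half):i + half + 1]
        out.append(Counter(segment).most_common(1)[0][0])
    return out

raw = ["QS"] * 10 + ["AS"] + ["QS"] * 10   # one spurious epoch mid-sequence
clean = smooth_stages(raw)                  # the lone "AS" is voted away
```

Smoothing of this kind exploits the physiological fact that sleep states persist for minutes, not single epochs.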
Affiliation(s)
- Saadullah Farooq Abbasi
- Department of Biomedical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Qammer Hussain Abbasi
- James Watt School of Engineering, University of Glasgow, Glasgow, G4 0PE, United Kingdom
- Faisal Saeed
- DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
8
Grooby E, Sitaula C, Ahani S, Holsti L, Malhotra A, Dumont GA, Marzbanrad F. Neonatal Face and Facial Landmark Detection from Video Recordings. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38083549. DOI: 10.1109/embc40787.2023.10340960.
Abstract
This paper explores automated face and facial landmark detection of neonates, which is an important first step in many video-based neonatal health applications, such as vital sign estimation, pain assessment, sleep-wake classification, and jaundice detection. Utilising three publicly available datasets of neonates in the clinical environment, 366 images (258 subjects) and 89 images (66 subjects) were annotated for training and testing, respectively. Transfer learning was applied to two YOLO-based models, with input training images augmented with random horizontal flipping, photometric colour distortion, translation and scaling during each training epoch. Additionally, the re-orientation of input images and fusion of trained deep learning models was explored. Our proposed model based on YOLOv7Face outperformed existing methods with a mean average precision of 84.8% for face detection, and a normalised mean error of 0.072 for facial landmark detection. Overall, this will assist in the development of fully automated neonatal health assessment algorithms. Clinical relevance: Accurate face and facial landmark detection provides an automated and non-contact option to assist in video-based neonatal health applications.
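The normalised mean error (NME) quoted for landmark detection is conventionally the mean point-to-point distance divided by a face-size measure. A minimal version follows; the choice of normaliser (e.g. the face bounding-box diagonal) is an assumption, as papers differ on it:

```python
import numpy as np

def normalized_mean_error(pred, gt, norm):
    """Mean Euclidean landmark error divided by a face-size measure,
    making the score comparable across face scales."""
    errors = np.linalg.norm(pred - gt, axis=1)
    return errors.mean() / norm

gt = np.array([[10.0, 10.0], [30.0, 10.0], [20.0, 25.0]])   # e.g. eyes, nose
pred = gt + np.array([3.0, 4.0])        # every landmark off by 5 pixels
nme = normalized_mean_error(pred, gt, norm=100.0)           # -> 0.05
```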
9
Huang X, Shirahama K, Irshad MT, Nisar MA, Piet A, Grzegorzek M. Sleep Stage Classification in Children Using Self-Attention and Gaussian Noise Data Augmentation. Sensors (Basel) 2023; 23:3446. PMID: 37050506. PMCID: PMC10098613. DOI: 10.3390/s23073446.
Abstract
The analysis of sleep stages in children plays an important role in early diagnosis and treatment. This paper introduces our sleep stage classification method, which addresses two challenges. The first is the data imbalance problem, i.e., the highly skewed class distribution with underrepresented minority classes. For this, a Gaussian Noise Data Augmentation (GNDA) algorithm was applied to polysomnography recordings to balance the data sizes of the different sleep stages. The second challenge is the difficulty in identifying minority sleep stages, given their short duration and their similarity to other stages in terms of EEG characteristics. To overcome this, we developed a DeConvolution- and Self-Attention-based Model (DCSAM), which can invert the feature map of a hidden layer back to the input space to extract local features, and which extracts the correlations between all possible pairs of features to distinguish sleep stages. The results on our dataset show that DCSAM based on GNDA obtains an accuracy of 90.26% and a macro F1-score of 86.51%, both higher than those of our previous method. We also tested DCSAM on a well-known public dataset, Sleep-EDFX, to check whether it is applicable to sleep data from adults. It achieves performance comparable to state-of-the-art methods, with accuracies of 91.77%, 92.54%, 94.73%, and 95.30% for six-stage, five-stage, four-stage, and three-stage classification, respectively. These results imply that our DCSAM based on GNDA has great potential to offer performance improvements in various medical domains by accounting for data imbalance and correlations among features in time series data.
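Gaussian-noise oversampling of the kind GNDA performs can be sketched as follows: minority classes are topped up with noisy copies of their own examples until every class reaches a target size. Names, the noise scale, and the target count are illustrative assumptions:

```python
import numpy as np

def gaussian_noise_augment(X, y, target, sigma=0.05, seed=0):
    """Oversample each minority class up to `target` examples by adding
    zero-mean Gaussian noise to randomly chosen class members."""
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for lab in np.unique(y):
        members = X[y == lab]
        deficit = target - len(members)
        if deficit > 0:
            picks = members[rng.integers(0, len(members), size=deficit)]
            X_parts.append(picks + rng.normal(0.0, sigma, picks.shape))
            y_parts.append(np.full(deficit, lab))
    return np.concatenate(X_parts), np.concatenate(y_parts)

X = np.vstack([np.zeros((8, 3)), np.ones((2, 3))])   # class 1 is the minority
y = np.array([0] * 8 + [1] * 2)
X_aug, y_aug = gaussian_noise_augment(X, y, target=8)
```

The small perturbations keep the synthetic epochs physiologically plausible while preventing the classifier from simply memorizing duplicated minority samples.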
Affiliation(s)
- Xinyu Huang
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Kimiaki Shirahama
- Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City 577-8502, Osaka, Japan
- Muhammad Tausif Irshad
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Department of IT, University of the Punjab, Lahore 54000, Pakistan
- Artur Piet
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Department of Knowledge Engineering, University of Economics, Bogucicka 3, 40287 Katowice, Poland
10
Security Analysis of Social Network Topic Mining Using Big Data and Optimized Deep Convolutional Neural Network. Comput Intell Neurosci 2022; 2022:8045968. PMID: 36188706. PMCID: PMC9525195. DOI: 10.1155/2022/8045968.
Abstract
This research conducts topic mining and security analysis of social networks using social network big data. At present, the main problem is that users' behavior on social networks may reveal their private data. The main contribution lies in the establishment of a network security topic detection model combining a Convolutional Neural Network (CNN) with social network big data technology. A Deep Convolutional Neural Network (DCNN) is utilized to analyze and search social network security issues, and the Long Short-Term Memory (LSTM) algorithm is used to extract Weibo topic information. Experimental results show that the recognition accuracy of the constructed model reaches 96.17% after 120 iterations, at least 5.4% higher than other models. Additionally, the accuracy, recall, and F1 value of the intrusion detection model are 88.57%, 75.22%, and 72.05%, respectively, at least 3.1% higher than those of other models. In addition, the training and testing times of the improved DCNN network security detection model stabilize at 65.86 s and 27.90 s, respectively, and its prediction time is significantly shorter than that of models proposed by other scholars. The experiments indicate that the improved DCNN achieves lower delay and performs well for secure transmission of network data.
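The accuracy, recall, and F1 values quoted above follow the standard confusion-matrix definitions, which can be stated in a few lines (the counts below are illustrative):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```

Note that a model can trade precision against recall; F1 summarizes both, which is why the paper reports all three.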
11
Yubo Z, Yingying L, Bing Z, Lin Z, Lei L. MMASleepNet: A multimodal attention network based on electrophysiological signals for automatic sleep staging. Front Neurosci 2022; 16:973761. PMID: 36051650. PMCID: PMC9424881. DOI: 10.3389/fnins.2022.973761.
Abstract
Pandemic-related sleep disorders affect human physical and mental health. Artificial intelligence (AI)-based sleep staging with multimodal electrophysiological signals helps people diagnose and treat sleep disorders. However, existing AI-based methods could not capture the more discriminative modalities or adaptively correlate multimodal features. This paper introduces a multimodal attention network (MMASleepNet) to efficiently extract, perceive, and fuse multimodal features of electrophysiological signals. MMASleepNet has a multi-branch feature extraction (MBFE) module followed by an attention-based feature fusion (AFF) module. In the MBFE module, branches are designed to extract the temporal and spectral features of the multimodal signals; each branch has a two-stream convolutional network with unique kernels to perceive features at different time scales. The AFF module contains a modal-wise squeeze-and-excitation (SE) block to adjust the weights of modalities with more discriminative features, and a Transformer encoder (TE) to generate attention matrices and extract the inter-dependencies among multimodal features. MMASleepNet outperforms state-of-the-art models in terms of different evaluation metrics on the Sleep-EDF and ISRUC-Sleep datasets. The implementation code is available at: https://github.com/buptantEEG/MMASleepNet/.
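The modal-wise squeeze-and-excitation gating described above can be sketched in a few lines of numpy. Weight shapes, the reduction ratio, and variable names are assumptions; a trained network would learn `W1` and `W2`:

```python
import numpy as np

def se_gate(features, W1, W2):
    """Squeeze-and-excitation over modality channels.
    features: (channels, time). Squeeze = global average over time;
    excitation = a small bottleneck MLP; the sigmoid output re-weights
    each channel so discriminative modalities are emphasized."""
    z = features.mean(axis=1)                   # squeeze: (C,)
    h = np.maximum(0.0, W1 @ z)                 # bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # per-channel gates in (0, 1)
    return gates[:, None] * features

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 30))                # 4 modalities, 30 time samples
W1 = rng.standard_normal((2, 4))                # reduction ratio 2 (assumed)
W2 = rng.standard_normal((4, 2))
out = se_gate(x, W1, W2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, never amplify them, which is what makes it a soft modality selector.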
12
Wang P, Zhou Y, Li Z, Huang S, Zhang D. Neural Decoding of Chinese Sign Language With Machine Learning for Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2721-2732. PMID: 34932480. DOI: 10.1109/tnsre.2021.3137340.
Abstract
Limb motion decoding is an important part of brain-computer interface (BCI) research. Among limb motions, sign language not only contains rich semantic information and abundant maneuverable actions but also provides different executable commands. However, many researchers focus on decoding gross motor skills, such as ordinary motor imagery or simple upper limb movements. Here we explored the neural features and decoding of Chinese sign language from electroencephalograph (EEG) signals with motor imagery and motor execution. Twenty subjects were instructed to perform movement execution and movement imagery based on Chinese sign language. Seven classifiers were employed to classify the selected features of the sign language EEG, and L1 regularization was used to learn and select the most informative features from the mean, power spectral density, sample entropy, and brain network connectivity. The best average classification accuracy is 89.90% for executed sign language (83.40% for imagined sign language). These results show the feasibility of decoding between different sign languages. Source localization reveals that the neural circuits involved in sign language are related to the visual contact area and the pre-movement area. Experimental evaluation shows that the proposed sign-language-based decoding strategy can obtain outstanding classification results, providing a reference for subsequent research on limb decoding based on sign language.
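Of the EEG features listed (mean, power spectral density, sample entropy, connectivity), sample entropy is the least standard, so a compact reference implementation may help. Conventions vary between papers; the version below assumes embedding dimension m = 2 and a tolerance of 0.2 times the signal's standard deviation:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: negative log of the ratio of (m+1)-length to
    m-length template matches within tolerance r, self-matches excluded.
    Lower values indicate a more regular, predictable signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= r) - len(t)   # subtract the diagonal self-matches

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 300)
regular = sample_entropy(np.sin(t))                    # predictable waveform
irregular = sample_entropy(rng.standard_normal(300))   # white noise
```

A sine wave scores far lower than white noise, which is why sample entropy helps separate stereotyped movement-related EEG activity from irregular background activity.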